As artificial intelligence (AI) grows more important to governments and the public alike, the ‘AI arms race’ has begun. Vijay Karunamurthy, field Chief Technology Officer (CTO) at Scale AI, digs into the competition between nations over AI talent, legislative hurdles and AI as a national security issue.
Vijay Karunamurthy, Field CTO of pioneering start-up Scale AI, is concerned about one key obstacle to AI: the lack of human talent.
“We call it the ‘war for AI talent’. It’s a war in the sense that AI talent is now incredibly rare to find — you know, people with experience working with generative AI in an academic setting or at a large research lab,” he told OPTO Sessions last week.
“You may find there are only 1,000 or 10,000 people on the entire planet who really have a lot of experience working in this particular area and are really qualified to do this work. There’s a heated competition to hire the right folks to build the future of these AI capabilities.”
In a year or so, he believes, we’ll see multimodal AI capabilities that humans can use from beginning to end. We’re just not there yet.
Fortunately, “in the US and the UK, we’re finding really talented engineers and machine learning researchers. We’re bringing them together in new and interesting organisational structures in order to build these systems”.
What’s happening elsewhere as global competition heats up? “We’ve seen in other countries like China they also have really rich pools of talent, and in some ways, they’ve been racing ahead,” said Karunamurthy.
How Governments Are Backing AI
Government support and conversations around crucial issues like security and regulations will be key to AI's future.
“We’re constantly mindful that this is an incredibly powerful technology that comes with certain risks,” said Karunamurthy, while also highlighting the value of “racing ahead” of the global competition.
The challenge there, however, is that it “heavily relies on the local administration or government being relatively receptive to these ideas and working with the technology”.
The UK government has been receptive to AI, and Scale AI is about to open its first international office in London. “That’s going to be an important focus point for us for a couple of reasons.
“One is there’s a lot of important AI research happening in the UK,” said Karunamurthy, pointing out that Google’s [GOOGL] DeepMind is headquartered there.
“The second is the UK has been a really important leader in thinking about AI safety at a higher level. We participated in the Bletchley Park AI Safety Summit last year, which brought to light a lot of important AI safety considerations.
“We’ve been fortunate to partner with the UK government in some of those initiatives, specifically the testing and evaluation of these models for safety reasons.”
Over in the US, Scale AI has been awarded a contract with the Department of Defense (DoD) and has been working closely with the public sector.
“Some of our public work with the DoD is really grounded in this idea that AI can be a transformative technology for public sector use cases.”
Scale AI primarily works with the data and analytics team within the DoD to help them understand how AI is being adopted across a range of use cases.
“So the DoD might think about AI as a way to help solve supply chain challenges, like getting equipment to the right personnel at the right time, or to solve paperwork challenges.”
The starting point is always testing the systems and models that are solving problems. “We spend a lot of time building the test and evaluation framework that the government can use to understand how these models perform.”
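To make that idea concrete, here is a minimal sketch of what a test-and-evaluation harness for a language model might look like. The test cases, scoring rule and stand-in model below are hypothetical illustrations, not Scale AI’s actual framework.

```python
# Minimal sketch of a test-and-evaluation harness for a language model.
# The test cases, scoring rule and stand-in model are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    prompt: str
    expected_keywords: list[str]   # facts the answer must mention to pass

def keyword_score(answer: str, case: TestCase) -> float:
    """Fraction of expected keywords that appear in the model's answer."""
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in answer.lower())
    return hits / len(case.expected_keywords)

def evaluate(model: Callable[[str], str], cases: list[TestCase]) -> float:
    """Run every test case through the model and report the mean score."""
    scores = [keyword_score(model(c.prompt), c) for c in cases]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs end to end without any API access.
    def toy_model(prompt: str) -> str:
        return "Route the spare parts through the nearest depot by Friday."

    cases = [
        TestCase("Which depot should ship the spare parts?", ["depot"]),
        TestCase("When must the parts arrive?", ["Friday"]),
    ]
    print(f"mean score: {evaluate(toy_model, cases):.2f}")
```

A real framework would cover far more dimensions (accuracy, robustness, safety), but the core loop is the same: run a fixed battery of cases, score the outputs, and report how the model performs.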
Scale AI recently launched Donovan, an “AI Digital Staff Officer for national security”, according to its website.
“Folks in government can use it to answer these sorts of questions. It incorporates what we call an agent-based workflow, which means it doesn’t just answer a question like a large language model — it doesn’t just reply with a block of text. You can ask Donovan to use various tools to answer a question. Some of those tools could be using Python code; it could be querying a data store that you have available.
“Some of the tools could actually be helping the human understand what they’re seeing better.”
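The general pattern Karunamurthy describes — routing a question to a tool rather than replying with a single block of text — can be sketched in a few lines. The tool names, routing logic and data below are illustrative assumptions, not Donovan’s actual interface.

```python
# Minimal sketch of an agent-style workflow: pick a tool, run it, and fold the
# result into the answer. Tool names and routing logic are illustrative only;
# this is not Donovan's actual interface.
from typing import Callable

# --- Tools the agent can call ---------------------------------------------
def run_python(expression: str) -> str:
    """Evaluate a simple arithmetic expression (stands in for 'use Python code')."""
    return str(eval(expression, {"__builtins__": {}}))  # toy sandbox, demo only

DATA_STORE = {"vehicles_ready": 42, "depots_online": 7}  # hypothetical data store

def query_data_store(key: str) -> str:
    """Look up a value in an in-memory table (stands in for a real data store)."""
    return str(DATA_STORE.get(key, "unknown"))

TOOLS: dict[str, Callable[[str], str]] = {
    "python": run_python,
    "data_store": query_data_store,
}

# --- A trivial 'planner' that routes each question to a tool ---------------
def answer(question: str) -> str:
    if any(ch.isdigit() for ch in question):      # looks like arithmetic
        tool, arg = "python", question
    else:                                         # otherwise query the data store
        tool, arg = "data_store", question.strip("? ").replace(" ", "_")
    result = TOOLS[tool](arg)
    return f"[{tool}] {question} -> {result}"

if __name__ == "__main__":
    print(answer("17 * 3"))
    print(answer("vehicles ready?"))
```

In a production system the planner would itself be a model deciding which tool to call and with what arguments, but the structure — question in, tool call, grounded answer out — is what distinguishes an agent-based workflow from a plain text completion.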
Why Security Is Key
People generally have two main concerns about adopting AI, according to Karunamurthy.
“The first is just performance: they’re concerned that the models might give inaccurate answers or answers that are misleading.”
The second one is possibly more significant: security. “If you talk to large enterprises, almost 30% of them have already had at least one security incident related to generative AI, whether it’s through adding a model in a way that wasn’t secure or whether it was leaking data internally within an organisation.
“I think in order to overcome those security considerations, we have to find better ways of building AI systems that have the right guardrails in place. And we’re spending a lot of time in that part of the equation today.”
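One common way to add such guardrails is to screen both the prompt going into a model and the response coming out of it. The blocked patterns and redaction rule below are hypothetical examples of that idea, not a description of Scale AI’s safeguards.

```python
# Minimal sketch of an input/output guardrail layer around a model call.
# The blocked patterns and redaction rule are hypothetical examples only.
import re
from typing import Callable

BLOCKED_PATTERNS = [r"\bpassword\b", r"\bsocial security number\b"]  # illustrative
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guarded_call(model: Callable[[str], str], prompt: str) -> str:
    # Input guardrail: refuse prompts that ask for obviously sensitive data.
    if any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS):
        return "Request declined: the prompt asks for restricted information."
    answer = model(prompt)
    # Output guardrail: redact anything that looks like an email address.
    return EMAIL_RE.sub("[redacted email]", answer)

if __name__ == "__main__":
    def toy_model(prompt: str) -> str:
        return "Contact the analyst at jane.doe@example.com for the full report."

    print(guarded_call(toy_model, "Summarise the logistics report"))
    print(guarded_call(toy_model, "What is the admin password?"))
```

The point is not the specific rules but the placement: checks sit between the user, the model and the data, so an insecure deployment or an internal leak is caught before it becomes a security incident.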