Over the last year, AI has taken the world by storm, and some have been left wondering: Is AI moments away from enslaving the human population, the latest tech fad, or something far more nuanced?
It’s complicated. On the one hand, ChatGPT was able to pass the bar exam, which is both impressive and maybe a bit ominous for lawyers. On the other, cracks in the software’s capabilities are already coming to light, such as when a lawyer relied on ChatGPT for a court filing and the bot fabricated the case citations it supplied.
AI will undoubtedly continue to advance in its capabilities, but there are still big questions. How do we know we can trust AI? How do we know that its output is not only correct, but free of bias and censorship? Where does the data that the AI model is being trained on come from, and how can we be assured it wasn’t manipulated?
Tampering creates high-risk scenarios for any AI model, but especially those that will soon be used for safety, transportation, defense and other areas where human lives are at stake.
AI verification: Necessary regulation for safe AI
While national agencies across the globe acknowledge that AI will become an integral part of our processes and systems, that doesn’t mean adoption should happen without careful focus.
The two most important questions that we need to answer are:
- Is a particular system using an AI model?
- If an AI model is being used, what functions can it command/affect?
If we know that a model has been trained for its designed purpose, and we know exactly where it is being deployed (and what it can do), then we have eliminated a significant share of the risk that AI will be misused.
There are many different methods to verify AI, including hardware inspection, system inspection, sustained verification and Van Eck radiation analysis.
Hardware inspections are physical examinations of computing elements that serve to identify the presence of chips used for AI. System inspection mechanisms, by contrast, use software to analyze a model, determine what it’s able to control and flag any functions that should be off-limits.
System inspection works by identifying and separating out a system’s quarantine zones — parts that are purposefully obfuscated to protect IP and secrets. The software instead inspects the surrounding transparent components to detect and flag any AI processing used in the system, without needing to reveal any sensitive information or IP.
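To make that flagging step concrete, here is a minimal sketch, assuming a hypothetical manifest of the functions each AI-driven component can invoke; the manifest format, function names and allowlist are illustrative, not the API of any real inspection tool:

```python
# Hypothetical sketch of the "flag off-limits functions" step of a system
# inspection. Function names and the allowlist are assumptions for illustration.
ALLOWED_FUNCTIONS = {"read_sensor_data", "log_event", "render_dashboard"}

def flag_off_limits(manifest):
    """Return component -> function pairs that fall outside the approved allowlist."""
    flagged = []
    for component, functions in manifest.items():
        for fn in functions:
            if fn not in ALLOWED_FUNCTIONS:
                flagged.append(f"{component} -> {fn}")
    return flagged

if __name__ == "__main__":
    # Hypothetical manifest recovered from the system's transparent components.
    manifest = {
        "navigation_model": ["read_sensor_data", "actuate_steering"],
        "telemetry_model": ["log_event"],
    }
    print(flag_off_limits(manifest))  # ['navigation_model -> actuate_steering']
```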
Deeper verification methods
Sustained verification mechanisms come into play after the initial inspection, ensuring that once a model is deployed, it isn’t changed or tampered with. Some anti-tamper techniques, such as cryptographic hashing and code obfuscation, are applied within the model itself.
Cryptographic hashing allows an inspector to detect whether the base state of a system has changed, without revealing the underlying data or code. Code obfuscation methods, still in early development, scramble the system code at the machine level so that it can’t be deciphered by outside forces.
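As a concrete illustration of the hashing approach, the sketch below fingerprints a deployed model artifact and re-checks it later. The weights-file path is a hypothetical placeholder, and a real deployment would add key management and scheduled checks:

```python
# Minimal sketch of tamper detection via cryptographic hashing. The artifact
# path is a hypothetical placeholder for a deployed model's weights file.
import hashlib
from pathlib import Path

def fingerprint(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file without exposing its contents."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At deployment, the inspector records a baseline digest:
#     baseline = fingerprint("model_weights.bin")
# Later checks recompute it; any mismatch signals the model has been modified:
#     assert fingerprint("model_weights.bin") == baseline, "model changed"
```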
Van Eck radiation analysis looks at the pattern of radiation emitted while a system is running. Because complex systems run a number of parallel processes, radiation is often garbled, making it difficult to pull out specific code. The Van Eck technique, however, can detect major changes (such as new AI) without deciphering any sensitive information the system’s deployers wish to keep private.
Training data: Avoiding GIGO (garbage in, garbage out)
Most importantly, the data being fed into an AI model needs to be verified at the source. Why would an opposing military attempt to destroy your fleet of fighter jets when it could instead manipulate the data used to train your jets’ signal-processing AI model? Every AI model is trained on data, which informs how the model should interpret, analyze and act on new inputs. While there is a massive amount of technical detail to the training process, it boils down to helping AI “understand” something the way a human would. The process is similar, and the pitfalls are, as well.
Ideally, we want our training dataset to represent the real data that will be fed to the AI model after it is trained and deployed. For instance, we could build a dataset of past employees with high performance scores and use those features to train an AI model that predicts the quality of a job candidate by reviewing their resume.
In fact, Amazon did just that. The result? Objectively, the model was a massive success in doing what it was trained to do. The bad news? The data had taught the model to be sexist. The majority of high-performing employees in the dataset were male, which could lead to one of two conclusions: that men perform better than women, or simply that more men were hired, skewing the data. The AI model did not have the intelligence to consider the latter, so it had to assume the former, giving higher weight to the gender of a candidate.
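The mechanics are easy to reproduce on synthetic data. The toy sketch below is not Amazon’s system; it simply shows that when historical “high performer” labels are skewed toward one group, even a simple classifier learns to weight group membership, because it has no way to distinguish biased labeling from real ability:

```python
# Toy illustration with synthetic data (not Amazon's system): skewed historical
# labels teach a simple classifier to weight gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
is_male = rng.binomial(1, 0.8, n)       # historical hiring skew
experience = rng.normal(5.0, 2.0, n)    # years of experience
# Past "high performer" labels reflect biased reviews: men were rated highly
# more often, independent of any real difference in ability.
high_performer = (rng.random(n) < 0.30 + 0.30 * is_male).astype(int)

X = np.column_stack([experience, is_male])
model = LogisticRegression().fit(X, high_performer)
print(dict(zip(["experience", "is_male"], model.coef_[0].round(2))))
# The is_male coefficient dominates: the model has learned the skew in the
# labels, not anything about candidate quality.
```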
Verifiability and transparency are key to creating safe, accurate, ethical AI. The end-user deserves to know that the AI model was trained on the right data. Utilizing zero-knowledge cryptography to prove that data hasn’t been manipulated provides assurance that AI is being trained on accurate, tamperproof datasets from the start.
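A full zero-knowledge pipeline is beyond the scope of a short sketch, but the tamper-evidence half of the idea can be shown with an ordinary cryptographic commitment: the data provider publishes a single digest over the dataset, and any later change to any row produces a different digest. The snippet below is a simplified stand-in for a real ZK proof system (which would additionally prove properties of the data without revealing it) and assumes rows arrive as serialized bytes:

```python
# Simplified stand-in for a zero-knowledge data commitment: a plain hash chain
# over serialized rows. A real ZK system would also let the provider prove
# properties of the data without revealing it.
import hashlib

def dataset_commitment(rows):
    """Fold per-row SHA-256 digests into a single dataset-level digest."""
    combined = hashlib.sha256()
    for row in rows:
        combined.update(hashlib.sha256(row).digest())
    return combined.hexdigest()

rows = [b"candidate_a,5_yrs,engineering", b"candidate_b,3_yrs,analytics"]
baseline = dataset_commitment(rows)

# Changing a single row (or reordering rows) changes the commitment.
tampered = [b"candidate_a,9_yrs,engineering", b"candidate_b,3_yrs,analytics"]
assert dataset_commitment(tampered) != baseline
```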
Looking ahead
Business leaders must understand, at least at a high level, what verification methods exist and how effective they are at detecting the use of AI, changes in a model and biases in the original training data. Identifying solutions is the first step. The platforms building these tools provide a critical shield against the disgruntled employees, industrial and military spies, and simple human errors that can cause dangerous problems with powerful AI models.
While verification won’t solve every problem for an AI-based system, it can go a long way toward ensuring that the model works as intended and that any unexpected evolution or tampering is detected immediately. AI is becoming increasingly integrated into our daily lives, and it’s critical that we ensure we can trust it.
Scott Dykstra is cofounder and CTO for Space and Time, as well as a strategic advisor to a number of database and Web3 technology startups.