From Crypto to AI: Bridging Trust & Safety in Disruptive Technologies
The emergence of transformative new technologies is almost always accompanied by exuberance, skepticism, and safety issues. When I began my compliance career at Coinbase nearly a decade ago, we were building fraud prevention and compliance systems that simply had not existed before. As I reflect on that time and compare it with the current state of AI, I can’t help but notice the similarities between the Trust & Safety issues in the early days of cryptocurrency and the challenges that generative AI faces today.
Crypto Compliance Challenges in the Early Days
In the early days of cryptocurrency, the pseudonymous and decentralized nature of the technology presented real challenges for regulators, financial institutions, and law enforcement. Some of the big concerns were:
- Anti-Money Laundering (AML) and Counter-Terrorist Financing (CTF): The use of cryptocurrencies for illicit activity and terrorist financing raised alarm bells among authorities. To combat this, regulators around the world began enforcing AML and CTF policies that required exchanges to implement traditional compliance controls such as CIP (Customer Identification Program) and SAR (Suspicious Activity Report) programs (a simplified sketch of the kind of monitoring rule behind a SAR program follows this list).
- Tax Evasion: The anonymity and global nature of cryptocurrencies made it difficult for tax authorities to track transactions and collect taxes. As a result, many governments began to impose stringent tax reporting requirements on cryptocurrency exchanges and users. In 2017, a court ordered Coinbase to provide customer information and transaction records for anyone who moved $20,000 or more on the platform between 2013 and 2015; Coinbase contested the original, much broader IRS summons and succeeded in significantly narrowing its scope.
- Security and Fraud: The lack of clear regulations and standard practices led to numerous hacks, scams, and frauds in the cryptocurrency space. An estimated $3.7 billion was lost to scams and hacks in 2022, driven in part by the record-breaking $620 million Ronin bridge heist. This prompted regulatory authorities to establish guidelines and licensing requirements for crypto exchanges; earlier this year, California launched a public crypto scam tracker.
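To make the compliance-controls point concrete, here is a minimal, hypothetical sketch of one kind of rule a transaction-monitoring program might run: flagging "structuring," where a user splits transfers to stay under a reporting threshold. The threshold, window, and Transaction shape are illustrative assumptions of mine, not any exchange's actual logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    user_id: str
    amount_usd: float
    timestamp: datetime

REPORT_THRESHOLD = 10_000     # classic currency-reporting threshold (USD); illustrative
WINDOW = timedelta(hours=24)  # rolling aggregation window; illustrative

def flag_possible_structuring(txns: list[Transaction]) -> set[str]:
    """Flag users whose individually sub-threshold transfers within a
    rolling window sum past the reporting threshold -- a common
    structuring pattern that warrants analyst review."""
    flagged: set[str] = set()
    by_user: dict[str, list[Transaction]] = {}
    for t in txns:
        by_user.setdefault(t.user_id, []).append(t)
    for user, user_txns in by_user.items():
        user_txns.sort(key=lambda t: t.timestamp)
        small = [t for t in user_txns if t.amount_usd < REPORT_THRESHOLD]
        for i, start in enumerate(small):
            window_total = sum(
                t.amount_usd for t in small[i:]
                if t.timestamp - start.timestamp <= WINDOW
            )
            if window_total >= REPORT_THRESHOLD:
                flagged.add(user)  # queue for analyst / SAR review
                break
    return flagged
```

In a real program, alerts like this land in a case-management queue, where an analyst investigates the activity and decides whether a SAR filing is warranted.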
AI Trust & Safety Challenges
Like the early days of cryptocurrency, AI faces several Trust & Safety challenges that need to be addressed. These include:
- Misinformation and Fake Content: The ability of AI models like GPT-4 to rapidly produce human-like text can lead to the creation of fake news, deepfakes, and other forms of misinformation. This has raised concerns about the potential of AI-generated content to influence public opinion, manipulate markets, or even incite violence.
- Intellectual Property Theft: AI has the potential to produce creative works such as articles, songs, or artwork that closely resemble existing copyrighted material. This poses a significant challenge for copyright holders and creators, as it becomes increasingly difficult to identify and protect their original content.
- Unintended Bias and Discrimination: AI models are trained on large datasets and can inadvertently learn and perpetuate biases present in the data. This can lead to AI-generated content that is offensive, discriminatory, or perpetuates harmful stereotypes. Remember Tay, Microsoft’s chatbot that learned from (and was corrupted by) Twitter data in 2016? Tay lasted a mere 16 hours before Microsoft had to pull the plug after Twitter users exploited its live-learning behavior and caused it to start spewing highly inappropriate, racist, and misogynistic tweets.
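To ground the bias point, below is a minimal sketch of the kind of pre-training dataset audit that can surface skew before a model ever learns it. The term lists and sample corpus are toy assumptions; production audits rely on much richer techniques, such as classifier-based toxicity scoring or embedding association tests.

```python
from collections import Counter

# Toy term lists -- illustrative assumptions only; real audits use far
# larger lexicons or learned classifiers rather than keyword matching.
IDENTITY_TERMS = {"women", "men", "immigrants", "teenagers"}
NEGATIVE_TERMS = {"lazy", "criminal", "stupid", "worthless"}

def cooccurrence_audit(corpus: list[str]) -> Counter:
    """Count sentences in which an identity term appears alongside a
    negative term -- a crude signal of skew worth closer review."""
    counts: Counter = Counter()
    for sentence in corpus:
        tokens = set(sentence.lower().split())
        if tokens & NEGATIVE_TERMS:
            for term in tokens & IDENTITY_TERMS:
                counts[term] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "immigrants are lazy according to this post",
        "teenagers built a robotics club at school",
    ]
    print(cooccurrence_audit(sample))  # Counter({'immigrants': 1})
```

Even a crude co-occurrence count like this can tell a team which slices of a scraped corpus deserve manual review before training begins.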
Addressing Compliance and Trust & Safety Challenges in Crypto & AI
While the challenges of crypto compliance and AI Trust & Safety differ in nature, they share some common ground. Both require regulatory clarity and a balanced approach that fosters innovation while protecting users and upholding societal standards. Some solutions include:
- Clear Regulatory Authority: In the early days of crypto there was, and still very much is, a severe lack of clarity as to which regulatory body had authority over various crypto assets and which rules applied. The safe deployment of large-scale AI models urgently requires a clear regulatory authority and framework.
- Collaboration: Public and private stakeholders need to work together to develop standards and best practices that address the challenges posed by crypto and AI. This collaboration should include industry leaders, regulators, academia, and civil society organizations.
- Transparency: Just as (most) crypto companies are transparent about the risks of crypto, AI companies should be transparent about the limitations, biases, and potential risks associated with their technology. This will enable users to make informed decisions and foster trust in the AI ecosystem. Companies should also offer some public visibility into the data used to train their models.
- Education and Awareness: Ensuring that users understand the capabilities and limitations of AI can help prevent misuse of the technology. In the same way that crypto taught its users about 2FA and how to spot phishing, fraud, and scams, increased awareness of the potential risks of AI-generated content will enable users to identify and report malicious activity.
The early days of cryptocurrency taught us valuable lessons about the importance of addressing compliance and regulatory challenges early in an emerging technology’s life. As generative AI continues to advance, it is crucial for stakeholders to work together to establish clear regulatory authority, build user trust, and ensure the technology’s safe and responsible use. By learning from the past, we can pave the way for a more secure and innovative future.