Marko Stokic is the Head of AI at the Oasis Protocol Foundation, where he works with a team focused on developing cutting-edge AI applications integrated with blockchain technology. With a business background, Marko’s interest in crypto was sparked by Bitcoin in 2017 and deepened through his experiences during the 2018 market crash. He pursued a master’s degree and gained expertise in venture capital, concentrating on enterprise AI startups before transitioning to a decentralized identity startup, where he developed privacy-preserving solutions. At Oasis, he merges strategic insight with technical knowledge to advocate for decentralized AI and confidential computing, educating the market on Oasis’ unique capabilities and fostering partnerships that empower developers. As an engaging public speaker, Marko shares insights on the future of AI, privacy, and security at industry events, positioning Oasis as a leader in responsible AI innovation.
Long before ChatGPT attracted over 100 million users within two months of its late-2022 launch, becoming one of the world’s most popular apps, we were talking about the potential for AI to make us healthier and extend our lives.
In the 1970s, a team at Stanford developed MYCIN, one of the first AI systems designed to aid medical diagnosis. MYCIN used a knowledge base of about 600 rules to identify bacteria causing infections and recommend antibiotics.
Though it outperformed human experts in trials, MYCIN was never used in clinical practice – partly due to ethical and legal concerns around machine-led diagnosis.
Fast forward five decades, and AI is now poised to transform healthcare in ways that would have seemed like science fiction in the MYCIN era. Modern AI can learn to spot diseases in medical imaging as accurately as a human clinician, and with relatively little training data. A Harvard study of AI-assisted cancer diagnosis reported 96% accuracy.
Improving diagnoses
In the UK, an AI system detected signs of breast cancer in 11 women that human clinicians had missed. In two separate studies, one from Microsoft and another from Imperial College London, AI models found more breast cancer cases than radiologists did. Similar results have been seen with AI detection of prostate cancer, skin cancer, and other conditions.
Our access to data has never been greater. The UK’s National Health Service, Europe’s largest employer, holds digitized records covering more than 65 million patients, a dataset valued at over £9.6 billion ($12.3 billion) a year.
This represents an unprecedented opportunity for AI to recognize patterns and generate insights that could radically improve diagnosis, treatment, and drug discovery.
The ability of AI to detect subtle patterns in vast datasets is one of its greatest strengths in healthcare. These systems can analyze not just medical imaging, but also genomic data, electronic health records, clinical notes, and more — spotting correlations and risk factors that might escape experienced human clinicians.
Some people might feel more comfortable with an AI agent handling their healthcare data than a human not directly involved in their care. But the issue isn’t just about who sees the data—it’s about how portable it becomes.
AI models built outside of trusted healthcare institutions pose new risks. While hospitals may already protect patient data, trusting external AI systems requires more robust privacy protections to prevent misuse and to ensure data stays secure.
Privacy challenges in AI healthcare
It’s worth noting that this potential comes with significant privacy and ethical concerns.
Healthcare data is perhaps the most sensitive personal information that exists. It can reveal not just our medical conditions, but our behaviors, habits, and genetic predispositions.
There are valid fears that widespread adoption of AI in healthcare could lead to privacy violations, data breaches, or misuse of intimate personal information.
Even anonymized data isn’t automatically safe. Advanced AI models have shown an alarming ability to de-anonymize protected datasets by cross-referencing with other information. There’s also the risk of “model inversion” attacks, where malicious actors can potentially reconstruct private training data by repeatedly querying an AI model.
These concerns are not hypothetical. They represent real barriers to the adoption of AI in healthcare, potentially holding back life-saving innovations. Patients may be reluctant to share data if they don’t trust the privacy safeguards.
Standards and regulations require geographical and demographic diversity in the data used to train AI models, yet sharing data between healthcare institutions demands confidentiality: beyond being highly sensitive, the data embodies each institution’s own insights into diagnoses and treatments.
This makes institutions wary of sharing data, owing to regulatory, intellectual property, and misappropriation concerns.
The future of privacy-preserving AI
Fortunately, a new wave of privacy-preserving AI development is emerging to address these challenges. Decentralized AI approaches, like federated learning, allow AI models to be trained on distributed datasets without centralizing sensitive information.
This means hospitals and research institutions can collaborate on AI development without directly sharing patient data.
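To make the idea concrete, here is a minimal federated-averaging sketch in Python. The hospitals, their synthetic datasets, and the simple linear model are hypothetical illustrations rather than any specific product: each site trains locally, and only model weights, never raw patient records, are sent back to be averaged.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One hospital's local training step (simple linear regression via gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
n_features = 8
global_weights = np.zeros(n_features)

# Three hospitals with private datasets that never leave their premises (synthetic here).
hospitals = [(rng.normal(size=(100, n_features)), rng.normal(size=100)) for _ in range(3)]

for _ in range(10):
    # Each site trains on its own data; only the resulting weights are shared.
    local_weights = [local_update(global_weights, X, y) for X, y in hospitals]
    # A coordinator averages the updates to form the new global model.
    global_weights = np.mean(local_weights, axis=0)

print(global_weights)
```

In practice, production frameworks layer secure aggregation or differential privacy on top of this loop so that even the shared weight updates reveal as little as possible about any individual record.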
Other promising techniques include differential privacy, which adds statistical noise to data to protect individual identities, and homomorphic encryption, which allows computations to be performed on encrypted data without decrypting it.
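As an illustration of the differential-privacy idea, the toy Laplace-mechanism sketch below (not a production library, using hypothetical patient ages) releases the mean of a sensitive column with noise calibrated so that any single record has only a bounded effect on the published number.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Changing one record shifts the clipped mean by at most (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

ages = np.array([34, 57, 62, 45, 71, 29, 50])          # hypothetical patient ages
print(dp_mean(ages, lower=0, upper=100, epsilon=0.5))  # noisy, privacy-preserving estimate
```

Smaller values of epsilon add more noise, trading accuracy for stronger privacy guarantees.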
Another intriguing development is our Runtime Off-chain Logic (ROFL) framework, which enables AI models to perform computations off-chain while maintaining verifiability. This could allow for more complex AI healthcare applications to tap into external data sources or processing power without compromising privacy or security.
Privacy-preserving technologies are still in their early stages, but they all point towards a future where we can harness the full power of AI in healthcare without sacrificing patient privacy.
We should be aiming for a world where AI can analyze your full medical history, genetic profile, and even real-time health data from wearable devices, while keeping this sensitive information encrypted and secure.
This would allow for highly personalized health insights without any single entity having access to raw patient data.
This vision of privacy-preserving AI in healthcare isn’t just about protecting individual rights—though that’s certainly important. It’s also about unlocking the full potential of AI to improve human health, in a way that earns the trust of the patients it serves.
By building systems that patients and healthcare providers can trust, we can encourage greater data sharing and collaboration, leading to more powerful and accurate AI models.
The challenges are significant, but the potential rewards are immense. Privacy-preserving AI could help us detect diseases earlier, develop more effective treatments, and ultimately save countless lives and unlock a wellspring of trust.
It could also help address healthcare disparities by allowing for the development of AI models that are trained on diverse, representative datasets without compromising individual privacy.
As AI models get more advanced, and AI-driven diagnoses get quicker and more accurate, the instinct to use them will become impossible to ignore. The important thing is that we teach them to keep their secrets.