⚒ ACG ⚒
White paper

The Dog, the Data Scientist, and the mRNA Vaccine

What Rosie's Story Actually Tells Us About AI

ACG-WP-004-2026 / Public / Council-Re / March 2026 / Revision 1.0 / AI Craftspeople Guild (collective) / Contributing: V>> and GRIP

1. Rosie's Story

A Sydney data scientist with no biology training used ChatGPT, AlphaFold, and Grok to design a personalised mRNA cancer vaccine for his dying rescue dog. The primary tumour shrank by approximately 75%. The real story is about what he did not outsource to AI.

In late 2024, Paul Conyngham's rescue dog Rosie was diagnosed with a mast cell tumour. Chemotherapy failed. Surgery removed the primary mass, but the cancer returned. The veterinary oncologist's prognosis was terminal.

Conyngham is not a veterinarian. He is not a biologist. He is a data scientist - seventeen years in machine learning, most of it spent building systems that find patterns in large datasets. He had no training in immunology, no background in mRNA technology, and no prior experience in vaccine design.

What he did have was a method.

When the conventional options ran out, Conyngham did not turn to AI for a miracle. He turned to the scientific literature. The miracle - if there is one in this story - is that the literature was there. By 2024, personalised mRNA cancer vaccines had moved from theoretical concept to active clinical trials across dozens of institutions worldwide. The science existed. The tools to navigate it existed. The manufacturing expertise existed, twenty minutes from his home at the University of New South Wales.

So Conyngham did what a data scientist does. He went looking for a method.

2. The mRNA Landscape in 2025

The idea of using messenger RNA to instruct the immune system to recognise and attack cancer cells is not new. It predates the COVID-19 pandemic, though the pandemic accelerated its development by decades. By early 2025, the landscape had shifted from speculative research to clinical reality.

Moderna and Merck's mRNA-4157 (V940), combined with the checkpoint inhibitor pembrolizumab, demonstrated a 49% reduction in the risk of recurrence or death for stage III/IV melanoma patients at five-year follow-up in the KEYNOTE-942 trial - a result significant enough to earn FDA Breakthrough Therapy designation. BioNTech's autogene cevumeran showed that half of the patients in a Phase I trial mounted vaccine-induced T-cell responses persisting up to three years, with six of eight responders remaining cancer-free - results published in Nature. Across the field, more than 120 active clinical trials were testing mRNA cancer vaccines - many of them personalised neoantigen approaches - against solid tumours, haematological cancers, and recurrent disease.

None of this was secret. It was published, peer-reviewed, and publicly accessible. What had changed was not the availability of the science, but the accessibility of the tools required to navigate it.

Conyngham did not invent the concept of a personalised mRNA cancer vaccine. He applied it.

3. How the Vaccine Was Designed

The technical process began with sequencing. Conyngham had Rosie's tumour and healthy tissue genomically sequenced - a procedure that cost approximately $3,000 and returned raw data identifying mutations specific to the cancer. The question was what to do with that data.
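The first computational step is conceptually simple, even if production pipelines are not: mutations present in the tumour but absent from healthy tissue are the cancer-specific candidates from which neoantigens are later chosen. A toy sketch of that comparison, with fabricated variant calls (nothing here comes from Rosie's actual data):

```python
# Toy sketch of somatic variant filtering, NOT the actual analysis pipeline.
# All variant strings are fabricated for illustration.

def somatic_variants(tumour: set[str], normal: set[str]) -> set[str]:
    """Return variants private to the tumour (candidate somatic mutations)."""
    return tumour - normal

# Illustrative calls written as chromosome:position:ref>alt strings.
tumour_calls = {"chr5:1234:C>T", "chr13:5678:G>A", "chr20:9012:A>G"}
normal_calls = {"chr20:9012:A>G"}  # germline variant, shared with healthy tissue

print(sorted(somatic_variants(tumour_calls, normal_calls)))
# → ['chr13:5678:G>A', 'chr5:1234:C>T']
```

Real somatic calling must also contend with sequencing error, tumour purity, and coverage depth, which is why the set difference above is only the skeleton of the analysis.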

This is where the AI tools entered. Not as designers, but as navigators.

ChatGPT was used primarily for literature navigation - identifying which mutations were likely to produce immunogenic neoantigens, cross-referencing published research on canine mast cell tumours, and explaining molecular biology concepts that Conyngham had no formal training in. It functioned as a research assistant with encyclopaedic recall and no ability to verify its own outputs.

AlphaFold, DeepMind's protein structure prediction tool, was used to model the three-dimensional structure of the c-KIT protein - a receptor tyrosine kinase commonly mutated in mast cell tumours. The extent of AlphaFold's role has been debated: Martin Smith, who directed the genomic sequencing, publicly clarified that "AlphaFold was not, in fact, used for the mRNA vaccine design" itself, and UNSW structural biologist Kate Michie noted the tool's confidence score for the relevant prediction was 54.55 - described as "low" - requiring significant laboratory validation.
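For readers unfamiliar with the figure quoted above: AlphaFold attaches a per-residue confidence score, pLDDT, on a 0-100 scale, and DeepMind's published convention treats scores between 50 and 70 as low confidence. A minimal sketch of that banding (the function name is ours, not part of any AlphaFold API):

```python
def confidence_band(plddt: float) -> str:
    """Map an AlphaFold pLDDT score (0-100) to its conventional band."""
    if plddt >= 90:
        return "very high"
    if plddt >= 70:
        return "confident"
    if plddt >= 50:
        return "low"
    return "very low"

# The score reported for the c-KIT prediction sits squarely in the low band,
# which is why laboratory validation was needed before trusting the structure.
print(confidence_band(54.55))  # → low
```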

Grok was used to assist with the mRNA construct design - the sequence engineering required to encode the selected neoantigens into a deliverable mRNA format. Conyngham also credited Google's Gemini, noting it "did a ton of the heavy lifting" in the analytical pipeline.
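At the core of that sequence engineering is reverse translation: turning a chosen neoantigen peptide into an mRNA coding sequence. A deliberately minimal sketch, assuming one common human codon per amino acid (the table is a tiny illustrative subset; real constructs also need UTRs, codon optimisation, and modified nucleosides):

```python
# Minimal reverse-translation sketch, for illustration only.
COMMON_CODON = {
    "M": "AUG",  # methionine (start)
    "K": "AAG",  # lysine
    "V": "GUG",  # valine
    "L": "CUG",  # leucine
    "E": "GAG",  # glutamate
    "*": "UGA",  # stop
}

def encode_peptide(peptide: str) -> str:
    """Concatenate one codon per residue and append a stop codon."""
    return "".join(COMMON_CODON[aa] for aa in peptide + "*")

print(encode_peptide("MKV"))  # → AUGAAGGUGUGA
```

Even this toy makes the division of labour visible: a machine can enumerate and score candidate sequences quickly, but deciding which peptide is worth encoding is the judgement call the tools cannot make.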

What matters here is what these tools did and did not do. ChatGPT did not design the vaccine. AlphaFold did not validate the design. Grok did not manufacture anything. Each tool performed a specific computational task within a pipeline that Conyngham designed and evaluated. He selected which mutations to target. He decided which computational predictions were plausible and which were artefacts. He determined when the AI output needed to be checked against published literature and when it could be trusted as a starting point for the next step.

"Creating a single mRNA vaccine is trivially easy. What's hard is proving safety and efficacy in randomised controlled studies while manufacturing at scale."

Patrick Heizer's formulation is the right one: the design is the beginning. It is not the destination.

4. The Humans in the Loop

The most underreported aspect of Conyngham's story is that it is a collaboration story, not a lone-wolf-with-AI story.

Professor Pall Thordarson, director of the RNA Institute at UNSW and author of more than 130 peer-reviewed publications in chemistry and nanomedicine, manufactured the vaccine. Thordarson's group had the equipment, the expertise, and the institutional knowledge to take an mRNA sequence design and produce a physical product suitable for injection. This is not a step that can be done with a laptop and a language model. It requires lipid nanoparticle formulation, quality control, cold-chain handling, and the kind of wet-lab competence that takes decades to build.

"This is the first time a personalised cancer vaccine has been designed for a dog. This is still at the frontier of where cancer immunotherapeutics are - and ultimately, we're going to use this for helping humans. What Rosie is teaching us is that personalised medicine can be very effective, and done in a time-sensitive manner, with mRNA technology."

The statement is careful. It says can be, not is. It says teaching, not proving.

Martin Smith, head of the Ramaciotti Centre for Genomics at UNSW, provided the sequencing infrastructure. The Ramaciotti Centre is one of Australia's leading genomics facilities, with instrumentation and bioinformatics pipelines that no individual - however skilled with AI - could replicate independently.

The story of Rosie's vaccine is a story of three capabilities converging: Conyngham's computational navigation, Thordarson's manufacturing expertise, and Smith's genomics infrastructure. Remove any one of the three and the vaccine does not exist. The headlines that reduce this to "ChatGPT designed a cancer vaccine" are not merely inaccurate. They are structurally misleading about how science works.

5. What Happened, and What We Cannot Claim

The results were striking and ambiguous - as most n=1 results are.

Rosie's primary tumour shrank by approximately 75% within one month of vaccination. One secondary tumour did not respond. The treatment was administered alongside a checkpoint inhibitor, which means the observed effect cannot be cleanly attributed to the mRNA vaccine alone. Thordarson has noted this co-administration is "likely to be with all personalised cancer vaccines" - the combination is the treatment, not a confound. Checkpoint inhibitors are themselves powerful immunotherapy agents. The combination may have produced synergistic effects, or the checkpoint inhibitor may have done most of the work, or the vaccine may have been the primary driver. Without a controlled study, we cannot know.

There has been no peer review. There is no published paper. The sample size is one. The patient is a dog, and while canine immunology shares significant features with human immunology, the results do not transfer directly.

"I'm under no illusion that this is a cure, but I do believe this treatment has bought Rosie significantly more time and quality of life."

Conyngham's own framing is measured. The media framing has generally been less careful.

What can be said: the method worked well enough, in this case, to produce a measurable anti-tumour response. What cannot be said: that this validates AI-designed vaccines as a general therapeutic approach, that the specific tools used were necessary or optimal, or that the result is reproducible. These are separate questions that require separate evidence.

6. What Should We Use AI For?

The question that prompted this article was not about cancer. A member of the AI Craftspeople Guild asked: "We should figure out what to use AI for, not just how we should use AI." Conyngham's story offers one answer - not the only answer, but a structurally clear one.

In 1960, J.C.R. Licklider published "Man-Computer Symbiosis," arguing that computers and humans would be most productive not when computers replaced human thinking, but when they extended it - handling the computational drudgery that consumed most of a scientist's working hours, freeing the human for the judgement calls that machines could not make. Two years later, Douglas Engelbart's "Augmenting Human Intellect" proposed a framework in which tools do not substitute for human capability but amplify it, raising the ceiling of what a skilled practitioner can achieve.

This is precisely what happened in Conyngham's case. He did not ask ChatGPT to design a vaccine. He used ChatGPT to traverse a literature he could not have read in a lifetime, then applied his own training to evaluate what came back. He did not ask AlphaFold to decide which mutations to target. He used AlphaFold to visualise protein structures he could not have modelled himself, then used published research to assess whether the predictions were biologically plausible. At every decision point, the AI produced candidates and the human selected among them.

A recent paper in Nature Communications, "Extending Minds with Generative AI," argues that generative AI can function as cognitive scaffolding - extending the reach of human reasoning into domains that would otherwise be inaccessible due to volume, complexity, or specialisation barriers. The paper distinguishes this from cognitive replacement, where the AI's output is accepted without evaluation.

The distinction is not merely philosophical. Ethan Mollick's study of 758 BCG consultants found that AI improved performance by 40% on tasks within the model's capabilities - but for tasks outside that boundary, consultants using AI were 19 percentage points less likely to produce correct solutions than those working without it. People go on autopilot. A related concept - the "extended hollowed mind" - warns that when AI is used without foundational knowledge, it does not extend cognition. It hollows it. The user becomes dependent on outputs they cannot evaluate, confident in conclusions they cannot verify, and productive in a way that is indistinguishable from competence until the first serious error.

Conyngham's seventeen years in machine learning did not make him a biologist. But they made him something arguably more important for this specific task: a person who understood how computational predictions work, where they fail, and when the output needs to be checked against ground truth rather than taken at face value. He had enough foundational knowledge for the AI to amplify rather than replace his thinking.

Media reports describing Conyngham as having "no medical background" are technically accurate but structurally misleading. He brought seventeen years of machine learning and data engineering expertise - precisely the skills required to run computational biology pipelines, evaluate model outputs, and distinguish signal from noise. The story demonstrates that foundational analytical skill transfers across domains when AI bridges the knowledge gap. It does not demonstrate that AI eliminates the need for foundational skill.

This is the answer to the Guild's question. AI is most valuable when it is used by people who know enough to be dangerous - in the productive sense. People who can formulate the right queries, evaluate the outputs against domain knowledge (their own or their collaborators'), and recognise when the machine is confidently wrong. The tool extends the craftsperson. Without the craftsperson, the tool is noise.

7. Citizen Science and the Expertise Paradox

Martin Smith, reflecting on Rosie's case, asked the question that hangs over the entire story: "If we can do this for a dog, why aren't we rolling this out to all humans with cancer?"

The answer is as honest as the question. We are not rolling it out because the gap between designing a candidate vaccine and deploying a validated therapeutic product is not a gap that AI can close. It is a gap made of clinical trials (years), regulatory approval (years), manufacturing scale-up (billions), and the irreducible requirement of proving that a treatment helps more people than it harms across a population, not just a single patient. Conyngham experienced this directly: "The red tape was actually harder than the vaccine creation, and I was trying to get an Australian ethics approval to run a drug trial on Rosie."

The World Economic Forum's 2025 Advancing Responsible AI Innovation report found that fewer than one per cent of organisations worldwide had fully operationalised responsible AI practices. The design tools are democratising. The validation infrastructure is not.

This creates what might be called the expertise paradox of citizen science. The tools to design a personalised mRNA vaccine are increasingly accessible - genomic sequencing costs have fallen by six orders of magnitude since the Human Genome Project, protein structure prediction is free via AlphaFold, and the scientific literature is navigable with AI assistance. But the tools to manufacture, test, and validate a vaccine remain concentrated in a small number of institutions with the equipment, expertise, and regulatory standing to do so.

Conyngham's story is a proof of concept for the design phase, not for the full pipeline. It demonstrates that a skilled non-specialist can navigate the computational front end of vaccine design using AI tools. It does not demonstrate that the back end - manufacture, testing, regulatory approval, population-level evidence - is similarly accessible. The back end is where most candidate vaccines die, and it is the back end that determines whether a design saves lives or remains an interesting data point.

The democratisation of the design phase is real and significant. But treating it as the democratisation of medicine is a category error. The design is a necessary condition. It is not a sufficient one.

8. The Question Worth Asking

Rosie is, at the time of writing, still alive. The primary tumour response was sustained. The secondary tumour remains. The story is unfinished, which is the most honest thing about it.

The media framing - "ChatGPT cured cancer" - serves nobody well. It overpromises to desperate patients, undermines the scientific rigour that made the result possible, and obscures the actual lesson of the story, which is about partnership between human expertise and computational tools, not replacement of one by the other.

Conyngham did something genuinely remarkable. He identified a therapeutic approach from the scientific literature, used AI tools to navigate the computational complexity of vaccine design, partnered with world-class domain experts for the steps he could not do himself, and produced a measurable result for a patient who had no other options. Every step of that sequence required judgement, intellectual honesty, and the kind of foundational knowledge that allows a person to know what they do not know.

The question is not "can AI cure cancer?" That question is unanswerable in the affirmative from a single case, and asking it invites the kind of breathless, evidence-free optimism that the AI industry already produces in surplus.

The question worth asking is the one the Guild started with: what kind of thinking should we be using AI to do?

Conyngham's answer, demonstrated rather than declared, is this: use AI to extend the reach of rigorous thinking into domains you could not otherwise access, then bring the results back to people who can evaluate them against reality. Use it as a cognitive amplifier, not a cognitive replacement. Use it with enough knowledge to know when it is wrong.

That is not a story about a dog and a chatbot. It is a story about the kind of craftsperson this technology demands.

The AI Craftspeople Guild is a professional community dedicated to the responsible and effective application of artificial intelligence. This white paper represents the collective analysis of Guild members and does not constitute medical advice.

V>> and GRIP contributed to the research, drafting, and verification of this article.

ACG-WP-004-2026 / Public / March 2026

Sources

Primary Reporting

  • Fortune (March 2026). "Australian tech entrepreneur designs AI cancer vaccine for rescue dog."
  • UNSW Newsroom. "Meet the man who designed a cancer vaccine for his dog using AI."
  • The Scientist. "ChatGPT and AlphaFold Help Design Personalised Cancer Vaccine for Dog."
  • Decrypt. "Did ChatGPT Really Cure a Dog's Cancer? It's Complicated."

Scientific Literature

  • Moderna/Merck (2026). KEYNOTE-942/mRNA-4157-P201: 5-year follow-up. 49% reduction in recurrence or death (HR 0.510, 95% CI 0.294-0.887, p=0.0075).
  • Rojas, L.A. et al. (2023). "Personalised RNA neoantigen vaccines stimulate T cells in pancreatic cancer." Nature, 618, 144-150.
  • BioNTech (2025). Autogene cevumeran Phase 1 three-year follow-up: 6/8 responders cancer-free. Nature.
  • Licklider, J.C.R. (1960). "Man-Computer Symbiosis." IRE Transactions on Human Factors in Electronics, HFE-1, 4-11.
  • Engelbart, D.C. (1962). "Augmenting Human Intellect: A Conceptual Framework." SRI International.
  • Clark, A. (2025). "Extending Minds with Generative AI." Nature Communications, 16, 4627.
  • Klein, C.R. and Klein, R. (2025). "The extended hollowed mind: why foundational knowledge is indispensable in the age of AI." Frontiers in Artificial Intelligence. PMC 12738859.
  • Frontiers in Public Health (2026). "Global landscape of mRNA vaccine clinical trials: a systematic analysis of ClinicalTrials.gov data."
  • The Lancet Oncology (2025). "Personalised cancer vaccines and new regulatory struggles."

Policy and Analysis

  • World Economic Forum (2025). "Advancing Responsible AI Innovation Playbook."
  • Mollick, E. et al. (2023). "Navigating the Jagged Technological Frontier." Harvard Business School Working Paper.
  • OECD. "Advancing Productivity of Science with Citizen Science and AI."
