No topic in 2023 has captured the public’s attention like artificial intelligence (AI) and machine learning models. With large language models (LLMs) and AI art generators now widely available, Canadians are both intrigued and concerned by the possibilities. As with any new technology, this anxiety is natural. But when developed appropriately, AI can be a game-changer in improving how we live, work, and play.

Shek Azizi, Senior Research Scientist, Google DeepMind

Vincent Dumoulin, Senior Research Scientist, Google DeepMind

Without even thinking about it, most of us already use some form of AI every day. The face recognition that unlocks our phones, digital map navigation, word suggestions on texting platforms, and personalized recommendations on streaming services all employ some degree of artificial intelligence, which users happily accept because it makes day-to-day life easier. AI has the potential to unlock scientific discoveries and help tackle humanity’s greatest challenges. At the same time, it is still an emerging technology and needs to be developed responsibly.

Canada is uniquely positioned to lead the way in AI innovation, thanks to its strong academic institutions, rich local talent, startups, and research facilities like Mila in Montreal and Toronto’s Vector Institute. Companies at the forefront of this AI shift, like Google, have long recognized Canada as a place to invest. Google employs many forward-thinking research scientists, particularly through laboratories like Google DeepMind, which has teams in Montreal and Toronto. Such labs allow Canadian researchers to dig deeply into the potential of AI by addressing some of society’s biggest challenges.

Revolutionizing medicine

One of the most exciting innovations to emerge from Google DeepMind is Med-PaLM, an LLM designed to provide high-quality answers to medical questions. LLMs, such as ChatGPT and Google’s Bard, are programs trained to understand and answer questions in a human-like manner. Tech companies have been working for years on building LLMs to answer medical questions. But challenges around collecting data, handling medical-specific vocabulary, and ensuring accuracy (errors could be harmful or even fatal in the medical realm) have posed significant obstacles for researchers.

Med-PaLM has proven to be a real breakthrough in this area. Shek Azizi, a senior research scientist at Google DeepMind, and her team use questions in the style of the US Medical Licensing Examination (USMLE) to measure Med-PaLM’s effectiveness. Google’s first version of Med-PaLM, completed in December 2022, was the first AI system to obtain a passing score (over 60 percent) on these questions. The model not only answered multiple-choice and open-ended questions accurately, but also provided its rationale and evaluated its own responses. The next iteration, Med-PaLM 2, consistently performed at an “expert” doctor level on medical exam questions, scoring 85 percent. “Med-PaLM is designed to provide accurate and alternative answers to medical questions,” Azizi says. “This tech can accelerate the translation of AI solutions and have a clinical impact in a safe and responsible way to improve the lives of billions of people.”

Don’t expect a Med-PaLM-enabled robot to replace your GP. Doctors will likely use Med-PaLM to summarize their notes more efficiently or to double-check diagnoses, providing much-needed efficiency as Canada continues to experience a physician shortage that leaves many health care professionals feeling overworked and burned out.

“The models can augment the work of professionals in this space,” Azizi says. “They can be used to improve the clinical workflow in a way that gives humans more time to advance things that matter and provide patient care instead of spending time on tasks that can be automated through AI systems.”

Revolutionizing conservation

While streamlining health care systems feels like a natural application for AI, scientists can also use these technologies to solve a wide range of societal issues, including some you might not expect. Canadian Google DeepMind research scientist Vincent Dumoulin and his team are working on a project called Perch, which uses bioacoustics technology to identify bird songs in audio recordings. When an audio clip is fed through Perch, the technology identifies the bird species present at the location where the clip was recorded. This, in turn, can be used to protect endangered species or to get a better picture of a region’s bird population.

“We need to develop AI systems in a way that maximizes the positive benefit to society.”

Shek Azizi

The function may seem simple, but the science behind training AI to identify bird songs, many of which might not be in the foreground of a recording, is very complex.

“We will feel that we have succeeded if a conservationist can take these models and repurpose them for a problem we hadn’t even anticipated,” Dumoulin says. “Our philosophy is not necessarily to go out into the world and deploy those models directly, but to provide tools that will allow conservationists to effect change.”

Both Azizi and Dumoulin are clear about that role: organizations like Google act to enable humans with tools that facilitate the work of making the world a better place, be it through medical advancements, environmental protection, or simply granting humans more leisure time. That said, both scientists and Google recognize the need to consider ethical responsibilities in any research and product development.

Maintaining ethics

For a technology like Med-PaLM, those responsibilities extend to ensuring accuracy and addressing privacy concerns and biases in the medical data used to train the LLM. Responsibility is a guiding principle in all of Google’s AI work, which is why the company developed its AI Principles in 2018. The principles are a set of concrete standards that actively govern the company’s research and product development and inform its business decisions.

“It is crucial for society and community to practise AI responsibly,” Azizi says. “We need to develop AI systems in a way that maximizes the positive benefit to society while also being aware of the challenges and addressing them actively.”

Dumoulin agrees. While he believes in AI’s immense positive potential, he’s also careful to defuse the public’s tendency to anthropomorphize it, imagining the technology as a human brain making decisions. It is, rather, a series of programs and applications designed by humans to address specific tasks. People are guiding the AI, not the other way around. As with any other tool, from the automobile to the internet, it’s our duty to use this technology for good.

“With these innovations, it’s up to us collectively to make sure the benefits are distributed to everyone,” Dumoulin says. “That’s something that transcends AI and machine learning.”


Elizabeth Chorney-Booth