Dean’s Speaker Series Hosts Brazil’s Tech Diplomat, an AI Expert

The international community’s approach to the governance of artificial intelligence (AI) was the focus of the second event in the MSU Law Dean’s Speaker Series on Artificial Intelligence, Law, and Society. Dr. Eugenio V. Garcia, Tech Diplomat and Deputy Consul General of Brazil in San Francisco, joined MSU Law Dean Linda Sheryl Greene to discuss the global response to the growth of AI products and services worldwide.


Dean Greene and Dr. Garcia

As Head of Science, Technology, and Innovation and an academic researcher on AI and global governance, Dr. Garcia traced his career path from young chess player to student of international relations and history. His interest in technology was piqued in 2016, while he headed the United Nations division of Brazil’s Ministry of Foreign Affairs. He became involved in the discussion on international humanitarian law and autonomous weapons, then began researching peace, security, and the implications of artificial intelligence in conflict. His research and “discussions on AI in general, AI ethics, AI governance and technology” led him to San Francisco and Silicon Valley two years ago, where he now serves as Deputy Consul General.

Dr. Garcia described AI as a “cross-border” technology. “Right now, we have some groups like the Tech Diplomacy Network, which is open to all the countries from Europe, North America, Latin America, the Caribbean, Africa, and Asia. The idea is to bridge the gap between technologists and policymakers because one of the biggest challenges we have in terms of governing powerful technologies is, of course, AI. If we succeed in having a proper governance of AI now into the future, this is what tech diplomacy is all about because then we can have both governments and the private sector working together to achieve these goals.”

Asked by Dean Greene to expound on the role AI is playing in the international governance context, Dr. Garcia described two dimensions. “One is the AI tools or the applications that we as knowledge workers or diplomats are using, like ChatGPT. The other dimension is AI as a topic of negotiations. This is what AI governance means in terms of international engagement for tech diplomats. Because, of course, AI is a general-purpose technology. It’s ubiquitous because it’s like electricity, and you can find a number of applications or tools for it. People are finding new tools and applications every day for AI systems, so it’s a very transformative general-purpose technology. Then we have also the distinction between near-term risks and long-term risks.”

Describing a long-term risk that many people think could be an existential risk to humanity, Dr. Garcia explained the concern that powerful AI systems can improve themselves. “There are self-learning programs. They say that if, in the case of recursive self-improvement, the program can keep improving itself and then have an intelligence explosion. This would give rise to superintelligence, which is much more than a human can expect to be in terms of measuring intelligence. Even if they say this is a theoretical perspective, many people are really worried about this and making serious studies on how to avoid this scenario, this worst-case scenario, that superintelligence would be in a position perhaps to become a danger to humanity.”


Dr. Garcia and MSU Law student

Many countries and large entities like the U.N. are employing AI for various purposes. Beyond providing economic efficiency and increased productivity, AI can be a force for good; the U.N. has a program called “AI for Good.” He explained, “In terms of global health and health care, a good example is drug discovery. Because as in the case of vaccines and public health in the pandemic, AI could help to explore alternatives in a way which is much more efficient and quicker to get ahead of the viruses. A better idea is using AI for scanning literature to find correlations that were not really visible to human researchers. Even for experts, this comprehensive review would otherwise be difficult.

“It’s not only in healthcare, but also in other domains,” continued Dr. Garcia. “In science, in general life sciences, or any other scientific field that you have a very powerful tool that would scan everything that was written about a given topic. Then find correlations and points that would be useful to connect. And by doing this identify possibly new medicines. Or helping scientists to discover new drugs or new solutions for problems that experts are struggling with now. So, it’s limitless.”

Another key concern internationally is the weaponization of AI. “We can have AI tools as a decision aid. I think you have some research being done precisely to help in decision making. But when we go to the high-stakes problems, that’s when you should draw the line. We can have the aid of some AI system to give advice. But then there is the problem of automation bias, when you trust the machine too much and you don’t question the output. Say AI is an oracle, and I ask what I should do in this situation. You put the question and then the machine will answer, ‘You can do x or y,’ or whatever. When you take the answer, you are delegating to the oracle the power to decide what to do. This is bad. I think you can perhaps ask for advice, maybe, but the ultimate decision should be made by humans, even if they are the most difficult ones.” He repeatedly emphasized the importance of human interaction with and control of this technology, stating, “Life and death decisions should not be left to machines.”

Alluding to AI advances, Dr. Garcia noted that generative AI can create text, video, audio, images, and even code. “It will be virtually impossible to distinguish between fact and fiction or human-made or AI-generated video, images, et cetera. This will blur the distinction between what’s real and what is fake, and imagine the implications if we just allow this to happen without any sort of guidance or governance. In a few years, you won’t be able to say if this image is true or if it’s false. It’s not perhaps an existential threat, but it’s really a concern. It’s a threat to the way society is organized and how we deal with each other. And this is also a problem for lawyers because, in some cases, the law is about this, building norms and institutions to organize society in a fair and equitable manner. This is the essence of law. We’ll be losing our references as a society. What is wrong? What is right? What is true and what is false? This is a very dystopian scenario.” He warned that the inability to tell the difference between real life and AI-made documents will create a storm of fake news, and that this misinformation is a “threat to democracy.”


Dr. Garcia, Dean Greene, Professor David Blankfein-Tabachnick

In closing, Dr. Garcia said the challenge is “how we organize this landscape with so many initiatives right now, which is good, and if there is a role for the United Nations.” Noting that there is no time to negotiate an international treaty, he suggested, “Maybe we can find a more agile way to proceed with governance at the global level and have a mix of initiatives and try to see if they connect with each other. So that the train goes in the right direction that we want it to go in terms of safety and in terms of protecting citizens everywhere and the rights of consumers and humanity. Also, thinking near-term and long-term risks at the same time, this is really challenging.”

If you missed the discussion between Dr. Garcia and Dean Greene, a video of the Speaker Series may be found on the College of Law website. Next up in the Dean’s Speaker Series is the March 18, 2024, presentation by Professor Cary Coglianese, who will speak on “Regulating AI Before It Regulates Us – Implications for the Legal Profession and the Practice of Law.” Visit the website to RSVP and to access the livestream link.