· 15 min read
Written by Elise Louvel, Co-Founder
“Ultimately, an AI tool’s true power comes not just from its intelligence, but from its relevance to the real world and the people it’s designed to serve.” – Stefania
Stefania's journey began with a university degree in foreign languages, where early encounters with language barriers convinced her of the need for inclusive communication. It took a pioneering turn when she introduced the first subject focused on AI and automatic translation at her university in Italy. Building on this foundation, she pursued advanced studies in France, earning a master's and a Ph.D. in computer science, programming, and NLP.
Fuelled by a passion for applying technology to improve communication, Stefania became a driving force in numerous enterprises across diverse sectors, spanning education, manufacturing, biotech, and even video games. In these roles she was instrumental in establishing data governance frameworks and embedding AI ethics principles into projects and enterprise strategies, often drawing on UNESCO guidelines such as the Recommendation on the Ethics of Artificial Intelligence (RAI) and methodologies like the Ethical Impact Assessment (EIA). Her expertise in integrating NLP and AI solutions significantly enhanced these companies' products and operations.
Eager to share her wealth of experience, Stefania actively mentors other women and holds prominent positions as a Google Women Techmakers, Women in AI, and Women in Games Ambassador. In 2024 she took on the role of Executive Director for Women Techmakers Montréal, a community-led initiative inspired by Google Women Techmakers. In 2025, Stefania was elected by UNESCO as AI Expert Without Borders, a global initiative designed to support Member States in implementing UNESCO’s Recommendation on the Ethics of AI in private and public sectors.
How did you enter the tech and AI space?
I took a course in France on what was not yet called AI but computer-assisted translation. There I discovered web programming; I had never done any programming before, even though I was comfortable using a computer. AI was then a specialization we could choose.
Was there anything that surprised you in that class? What lessons have stuck with you?
The class was truly divided in two. One half, largely women, came from linguistics. The other half was mostly men who had studied maths, computer science, or similar courses.

So I really witnessed the impact of the disparities between women and men. It is true that the students with technical backgrounds were quicker at learning programming languages and frameworks. However, the moment we needed to explain language structure and deconstruct linguistics in a way that could be written for a computer, that is where they struggled. That is how connections formed between two halves that had seemed separate. It's part of what makes AI magical: there are so many entry points, and it drives you to consider different points of view.
You might be a computer science genius, capable of building the most intricate algorithms and groundbreaking models. But here’s the critical truth about developing AI: raw technical brilliance isn’t enough. If you don’t grasp the context of your AI tool, how and where it will be used, it simply won’t be efficient.
Imagine you’re developing an AI assistant. Is it for Italy? France? Or perhaps Canada? Even within a single language, like English, usage varies dramatically from country to country, or even state to state. What’s perfectly clear and helpful in one region could be confusing or even inappropriate in another. Without understanding these nuances, your AI risks falling flat.
Beyond cultural and geographic considerations, there’s also the technical language of the domain itself. If you don’t truly understand what your target users are talking about, their industry-specific terms, their unique challenges, their workflow, then creating a truly functional product, let alone one that sells, becomes an uphill battle.
Ultimately, an AI tool’s true power comes not just from its intelligence, but from its relevance to the real world and the people it’s designed to serve.
Stefania’s dedication to responsible AI led to her appointment as an AI Expert for UNESCO’s Women4Ethical AI program, providing a platform to further apply her expertise in fostering ethical approaches in AI system creation.
I suppose when you entered the master’s program your school expected you to be as functional as people who had a strong programming background. How did you manage to progress?
While mastering specific programming languages is essential, a deeper advantage lies in the foundational understanding of logic and algorithms. This isn’t just a technical skill, it’s a way of thinking that mirrors how we process information in human language.
Think about it: grasping the core principles of logic and algorithmic problem-solving provides a powerful framework that transcends any single programming language. It makes the transition from one language to another significantly smoother, as the underlying concepts of what you’re trying to achieve remain consistent, even if the syntax changes. It’s akin to understanding the rules of grammar, which allows you to learn and adapt to different human languages more easily.
A key personal insight has been applying programming principles to the structure of human language itself. This involves dissecting how we describe actions and outcomes, how we find the inherent logic and syntax within our everyday conversations. This practice of “conversing with the computer” through analyzing human communication has been incredibly beneficial for refining programming skills, enhancing precision in thought, and breaking down complex ideas into manageable, logical steps.
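Stefania's idea of finding the logic and syntax hidden in everyday language can be sketched in a few lines. The toy example below is purely illustrative, not her actual work: it assumes a fixed subject-verb-object word order and maps a simple declarative sentence onto the kind of structured data a program can reason about.

```python
# Toy illustration: treating a simple sentence as structured data.
# Assumes a strict subject-verb-object order; real NLP is far richer.
def parse_svo(sentence):
    """Split a simple declarative sentence into subject, verb, and object."""
    words = sentence.rstrip(".").split()
    if len(words) < 3:
        raise ValueError("expected at least a subject, a verb, and an object")
    # First word is the subject, second the verb, the rest the object phrase.
    return {
        "subject": words[0],
        "verb": words[1],
        "object": " ".join(words[2:]),
    }

print(parse_svo("Stefania teaches automatic translation."))
# {'subject': 'Stefania', 'verb': 'teaches', 'object': 'automatic translation'}
```

The point is not the parser itself but the habit it trains: decomposing a fuzzy human artifact into explicit, ordered rules a machine can follow.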
Would you say that anyone can get into AI and that diversity of background is important too?
Yes, of course. A few of us were already doing it back then, and with how AI has evolved and become more accessible, it is even more true now. That should reassure everyone who is scared to start, because I didn't come from a programming background either. Of course, you will struggle with some aspects at first, but you will overcome those difficulties.
What resources did you use to learn?
In my experience, the best way to learn is to try and fail. I try, I see what happens, and I improve. As long as the computer doesn’t explode, all is fine. Since I didn’t have a strong background in mathematics and programming, even to understand machine learning I adopted a more hands-on approach from the get-go.
What is a project that really helped you learn or that helped shape your thinking?
One project that significantly shaped my thinking was a simple assignment we had for our master’s program. The task involved creating a small car that we could drag and drive around the screen.
We realised at that point that, even though we now have very smart tools that make these kinds of projects easy, you really need to grasp that programming is a step-by-step process.
The logic and algorithm classes were particularly beneficial for me, as they explained why it’s important to follow certain programming structures. I also came to realize that we often tend to overcomplicate things and push the limits of the programming language itself. Reflecting on my educational journey, I can see the shift that occurred over time. When I began my bachelor’s program, I focused more on the theoretical aspects, but by the end of my master’s, the emphasis had shifted to practical applications.
What is a mentorship moment that changed your career, and how did it shape your approach to leadership or problem-solving?
During the pandemic I found a woman who listened to me and to my dream to enter the game industry. She helped me understand how the industry works, what to do and not to do, and especially how to use my strengths to be at ease even in an unfamiliar environment, including a male-dominated environment.
What is the most valuable lesson you have learned from mentoring others, and how has it impacted your work?
I think it’s the importance of listening to others and communicating in a way that they can really receive the message. Sometimes we do a lot of good stuff, but we can’t communicate it in a way that is received by others. This is something that I take with me wherever I go.
What book, movie, or podcast has significantly shaped your thinking?
It’s extremely hard to reply to this question, hehe. I don’t really have something significant to share, maybe The Why Cafe, because it is a comforting book that lets you take time to think about yourself, how you perceive things, and how we are all different but at the same time have similar struggles.
Was there ever a time you felt like giving up? How did you push through? What kept you going?
Yes, of course. My field is demanding because it sits between research and development, so you really need a strong mindset and perseverance. One struggle many colleagues, and I, have had is balancing staying up to date on the latest developments with understanding technologies in depth. There is a subtle line between having in-depth expertise that can last for years, with continued nourishment, and being the person who knows about the latest model released two minutes ago. There is a trap of fake expertise people fall into, where they learn the names of all the models but not how to use any of them.
If you had to describe AI with one metaphor, what would it be and why?
A support. When used in a way where humans are in control, it can support humans and their uniqueness and creativity by letting the machine do the boring and repetitive tasks.
In your opinion, what is a currently underappreciated AI innovation or trend that should be getting more attention?
It’s hard to reply to this one because there is more than one. I would say AI for resource-constrained environments, because it might help bring AI power where resources are scarce, and it would help the tech world be more sustainable.
What do you consider the most significant ethical challenge AI faces right now, and how should we address it?
Transparency: about how AI works, about the data it uses, and about its environmental and human impact. Now that AI has been released to the general public and is no longer just a research product, we should have clear standards and protocols applied globally. AI should be transparent, explainable, and supportive of marginalized groups that lack a clear and powerful communication channel to raise their concerns.
What is the biggest myth about AI currently that you wish to debunk? Why?
AI is not intelligent. AI is not your friend to ask for advice about your feelings. AI is not right all the time, and we should not take for granted that the replies we receive are correct or are the only ones possible.
How do you manage data quality issues when training AI models, and what steps do you take to ensure the data is suitable for model building?
I start with data profiling and exploration to understand the data itself, then I clean the data. After that I apply feature engineering to make sure that what I want to do is aligned with the data I have and the goal I have set. Monitoring and auditing the results over time is also crucial.
That is all on the tech side. On the project side I make sure that each type of user is involved and that their needs and concerns are addressed over time. I ensure they are able to understand how to use the model as well as the limits and risks of the model.
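The profile-clean-check cycle described above can be sketched in plain Python. This is a hypothetical minimal example: the records, field names, and thresholds are all invented for illustration, not taken from any specific pipeline.

```python
# Minimal sketch of the profile -> clean -> suitability-check cycle.
# All data, field names, and thresholds are invented for illustration.
records = [
    {"text": "great product", "label": "pos"},
    {"text": "terrible support", "label": "neg"},
    {"text": "", "label": "neg"},               # empty text: unusable
    {"text": "great product", "label": "pos"},  # exact duplicate
    {"text": "love it", "label": None},         # missing label
]

# 1. Profiling and exploration: understand what the data actually contains.
n_total = len(records)
n_empty = sum(1 for r in records if not r["text"])
n_unlabeled = sum(1 for r in records if r["label"] is None)
print(f"{n_total} rows, {n_empty} empty texts, {n_unlabeled} missing labels")

# 2. Cleaning: drop unusable rows and exact duplicates.
seen = set()
clean = []
for r in records:
    key = (r["text"], r["label"])
    if r["text"] and r["label"] is not None and key not in seen:
        seen.add(key)
        clean.append(r)

# 3. Suitability check: is the cleaned data still aligned with the goal?
labels = {r["label"] for r in clean}
assert len(clean) >= 2 and len(labels) >= 2, "too little usable data to train on"
print(f"kept {len(clean)} of {n_total} rows, labels: {sorted(labels)}")
```

In practice each stage would be far richer (statistical profiling, feature engineering, ongoing monitoring), but the shape of the loop, inspect before you clean and verify before you train, stays the same.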
If you were building an AI tool from scratch, which frameworks, languages, or tools would you choose? Could you explain the reasoning behind your choices in regard to your experience or personal projects?
It’s extremely hard to reply to this because it all depends on the type of project, the environment you are using, the freedom you have, and, of course, time constraints. For sure I would use official and open-source frameworks that a) are judged safe and trackable and b) can be reshared and reused by others.
Apart from that I would also make sure to keep a record of every test and prepare good documentation. This tracking and tracing is vital for transparency, which supports future users following in similar footsteps.
Could you share an experience where you had to make an unconventional decision, which went against popular opinion or current trends, on an AI project? How did you navigate that situation?
It happens from time to time that I have to scale down or refuse some projects because they are done in a “rush mode” without the proper tools and risk-mitigation plans. Or the team might want to be an “AI first” company when there is really no need to use fancy and expensive AI for the results they want to obtain.
I usually explain the consequences and try to propose something different that can partially solve the challenge in the short term while working on a robust solution in the long term.