Women in AI: Sarah Bitamazire helps companies implement responsible AI


To give AI-focused female academics and others their deserved — and long-awaited — spotlight, TechCrunch is launching a series of interviews focused on notable women who have contributed to the AI revolution.

Sarah Bitamazire is the Chief Policy Officer at boutique advisory firm Lumiera, where she also helps write the Lumiera Loop newsletter, which focuses on AI literacy and responsible AI adoption.

Prior to this, she worked as a policy advisor in Sweden, where she focused on gender equality, foreign affairs law, and security and defense policy.

Briefly, how did you get your start in AI? What attracted you to the field?

AI found me! The impact of AI keeps growing in the fields I am deeply involved in, so it became essential for me to understand the technology and its challenges in order to give sound advice to high-level decision-makers.

First, in the defense and security sector, where AI is used both in research and development and in active warfare. Second, in arts and culture: creators were among the first groups to see the added value of AI as well as its challenges. They helped bring copyright issues to the surface, such as the ongoing case in which several daily newspapers are suing OpenAI.

You know something is having a big impact when leaders with very different backgrounds and pain points ask their advisors, “Can you tell me about this? Everyone is talking about it.”

What work are you most proud of in the AI field?

We recently worked with a client who had tried and failed to integrate AI into their research and development work. Lumiera established an AI integration strategy with a roadmap tailored to their specific needs and challenges. The combination of a curated AI project portfolio, a structured change management process, and leadership that recognized the value of multidisciplinary thinking made this project a huge success.

How do you cope with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

By being very clear on the why. I am actively involved in the AI industry because it serves a deeper purpose and there are problems to solve. Lumiera’s mission is to provide leaders with comprehensive guidance so they can make responsible decisions with confidence in the technological age. This sense of purpose is always there, no matter what field we pursue. Male-dominated or not, the AI industry is huge and constantly growing more complex. No one can see the whole picture, and we need more perspectives so we can learn from each other. The challenges are enormous, and we all need to collaborate.

What advice would you give to women wanting to enter the AI field?

Getting into AI is like learning a new language or a new skill. It has immense potential to solve challenges across various sectors. What problem do you want to solve? Figure out how AI can be the solution, and then focus on solving that problem. Keep learning, and connect with people who inspire you.

As AI evolves, what are the most pressing issues facing it?

The speed at which AI is evolving is an issue in itself. I believe asking this question regularly and repeatedly is a vital part of moving forward honestly in the AI field. We do this at Lumiera every week in our newsletter.

Here are some examples that are of most interest right now:

  • AI Hardware and Geopolitics: Public sector investment in AI hardware (GPUs) will likely increase as governments around the world deepen their AI knowledge and begin to take strategic and geopolitical steps. So far, there have been some moves in this direction from countries such as the UK, Japan, the UAE and Saudi Arabia. This is a sector to keep an eye on.
  • AI Benchmarks: As we become more reliant on AI, it is essential to understand how we measure and compare its performance. Choosing the right model for a given use case requires careful consideration. The best model for your needs is not necessarily the one at the top of the leaderboard. Since models are changing so rapidly, the accuracy of benchmarks will also fluctuate.
  • Balance automation with human oversight: Believe it or not, over-automation is a thing. Decision making requires human judgment, intuition, and contextual understanding. This cannot be replicated through automation.
  • Data Quality and Governance: Where is the good data?! Data flows into, out of, and across organizations every second. If that data is poorly managed, your organization will not benefit from AI, and in the long run it can be detrimental. Your data strategy is your AI strategy. Data systems architecture, management, and ownership need to be part of the conversation.

What issues should AI users be aware of?

  • The algorithms and data aren’t perfect: As a user, it is important to be critical and not blindly trust the output, especially if you are using off-the-shelf technology. These tools are new and evolving, so keep that in mind and apply common sense.
  • Energy consumption: The combination of the computational requirements of training large AI models and the energy requirements of operating and cooling the necessary hardware infrastructure leads to huge power consumption. Gartner predicts that by 2030, AI could consume 3.5% of the world’s electricity.
  • Educate yourself, and use a variety of sources: AI literacy is important! To be able to make good use of AI in your life and work, you must be able to make informed decisions about its use. AI should help you make your decisions, not make decisions for you.
  • Perspective density: You need to involve people who know their problem area well so that they can understand what kinds of solutions can be built with AI, and do so throughout the AI development life cycle.
  • The same applies to ethics: This is not something that can be added on top of an AI product after it has been built; ethical considerations need to be incorporated throughout the creation process, starting from the research stage. This is done by conducting social and ethical impact assessments, minimizing biases, and promoting accountability and transparency.

When building AI, it is essential to recognize the skill limitations within an organization. Gaps are opportunities for growth: they enable you to prioritize areas where you need external expertise and develop strong accountability mechanisms. All factors, including the current skill set, team capacity, and available monetary resources, must be assessed. These factors, among others, will influence your AI roadmap.

How can investors better promote responsible AI?

First, as an investor, you want to ensure that your investment is solid and will last for the long term. Investing in responsible AI safeguards financial returns and reduces the associated risks, such as those around trust, regulation, and privacy.

Investors can push for responsible AI by looking at indicators of responsible AI leadership and use. A clear AI strategy, dedicated responsible AI resources, published responsible AI policies, strong governance practices, and the integration of human feedback are factors to consider. These indicators should be part of a solid due diligence process. More science, less subjective decision-making. Distancing from unethical AI practices is another way to encourage responsible AI solutions.

