
Women in AI: Sandra Wachter, professor of data ethics at Oxford

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Sandra Wachter is a professor and senior researcher in data ethics, AI, robotics, algorithms and regulation at the Oxford Internet Institute. She’s also a former fellow of The Alan Turing Institute, the U.K.’s national institute for data science and AI.

While at the Turing Institute, Wachter evaluated the ethical and legal aspects of data science, highlighting cases where opaque algorithms have become racist and sexist. She also looked at ways to audit AI to tackle disinformation and promote fairness.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I do not remember a time in my life where I did not think that innovation and technology have incredible potential to make the lives of people better. Yet, I do also know that technology can have devastating consequences for people’s lives. And so I was always driven — not least due to my strong sense of justice — to find a way to guarantee that perfect middle ground. Enabling innovation while protecting human rights.

I always felt that law has a very important role to play. Law can be that enabling middle ground that both protects people but enables innovation. Law as a discipline came very naturally to me. I like challenges, I like to understand how a system works, to see how I can game it, find loopholes and subsequently close them.

AI is an incredibly transformative force. It is implemented in finance, employment, criminal justice, immigration, health and art. This can be good and bad. And whether it is good or bad is a matter of design and policy. I was naturally drawn to it because I felt that law can make a meaningful contribution in ensuring that innovation benefits as many people as possible.

What work are you most proud of (in the AI field)?

I think the piece of work I am currently most proud of is a piece I co-authored with Brent Mittelstadt (a philosopher) and Chris Russell (a computer scientist), with me as the lawyer.

Our latest work on bias and fairness, “The Unfairness of Fair Machine Learning,” revealed the harmful impact of enforcing many “group fairness” measures in practice. Specifically, fairness is achieved by “leveling down,” or making everyone worse off, rather than helping disadvantaged groups. This approach is very problematic in the context of EU and U.K. non-discrimination law as well as being ethically troubling. In a piece in Wired we discussed how harmful leveling down can be in practice — in healthcare, for example, enforcing group fairness could mean missing more cases of cancer than strictly necessary while also making a system less accurate overall.

For us this was terrifying and something that is important to know for people in tech, policy and really every human being. In fact we have engaged with U.K. and EU regulators and shared our alarming results with them. I deeply hope that this will give policymakers the necessary leverage to implement new policies that prevent AI from causing such serious harms.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

The interesting thing is that I never saw technology as something that “belongs” to males. It was only when I started school that society told me that tech does not have room for people like me. I still remember that when I was 10 years old the curriculum dictated that girls had to do knitting and sewing while the boys were building birdhouses. I also wanted to build a birdhouse and requested to be transferred to the boys class, but I was told by my teachers that “girls do not do that.” I even went to the headmaster of the school trying to overturn the decision but unfortunately failed at that time.

It is very hard to fight against a stereotype that says you should not be part of this community. I wish I could say that things like that do not happen anymore, but this is unfortunately not true.

However, I have been incredibly lucky to work with allies like Brent Mittelstadt and Chris Russell. I had the privilege of incredible mentors such as my Ph.D. supervisor and I have a growing network of like-minded people of all genders that are doing their best to steer the path forward to improve the situation for everyone who is interested in tech.

What advice would you give to women seeking to enter the AI field?

Above all else try to find like-minded people and allies. Finding your people and supporting each other is crucial. My most impactful work has always come from talking with open-minded people from other backgrounds and disciplines to solve common problems we face. Accepted wisdom alone cannot solve novel problems, so women and other groups that have historically faced barriers to entering AI and other tech fields hold the tools to truly innovate and offer something new.

What are some of the most pressing issues facing AI as it evolves?

I think there are a wide range of issues that need serious legal and policy consideration. To name a few, AI is plagued by biased data which leads to discriminatory and unfair outcomes. AI is inherently opaque and difficult to understand, yet it is tasked to decide who gets a loan, who gets the job, who has to go to prison and who is allowed to go to university.

Generative AI has related issues but also contributes to misinformation, is riddled with hallucinations, violates data protection and intellectual property rights, puts people’s jobs at risk and contributes more to climate change than the aviation industry.

We have no time to lose; we need to have addressed these issues yesterday.

What are some issues AI users should be aware of?

I think there is a tendency to believe a certain narrative along the lines of “AI is here and here to stay, get on board or be left behind.” I think it is important to think about who is pushing this narrative and who profits from it. It is important to remember where the actual power lies. The power is not with those who innovate, it is with those who buy and implement AI.

So consumers and businesses should ask themselves, “Does this technology actually help me and in what regard?” Electric toothbrushes now have “AI” embedded in them. Who is this for? Who needs this? What is being improved here?

In other words, ask yourself what is broken and what needs fixing and whether AI can actually fix it.

This type of thinking will shift market power, and innovation will hopefully steer towards a direction that focuses on usefulness for a community rather than simply profit.

What is the best way to responsibly build AI?

Having laws in place that demand responsible AI. Here too a very unhelpful and untrue narrative tends to dominate: that regulation stifles innovation. This is not true. Regulation stifles harmful innovation. Good laws foster and nourish ethical innovation; this is why we have safe cars, planes, trains and bridges. Society does not lose out if regulation prevents the creation of AI that violates human rights.

Traffic and safety regulations for cars were also said to “stifle innovation” and “limit autonomy.” These laws prevent people driving without licenses, prevent cars entering the market that do not have safety belts and airbags and punish people that do not drive according to the speed limit. Imagine what the automotive industry’s safety record would look like if we did not have laws to regulate vehicles and drivers. AI is currently at a similar inflection point, and heavy industry lobbying and political pressure means it still remains unclear which pathway it will take.

How can investors better push for responsible AI?

I wrote a paper a few years ago called “How Fair AI Can Make Us Richer.” I deeply believe that AI that respects human rights and is unbiased, explainable and sustainable is not only the legally, ethically and morally right thing to do, but can also be profitable.

I really hope that investors will understand that if they are pushing for responsible research and innovation that they will also get better products. Bad data, bad algorithms and bad design choices lead to worse products. Even if I cannot convince you that you should do the ethical thing because it is the right thing to do, I hope you will see that the ethical thing is also more profitable. Ethics should be seen as an investment, not a hurdle to overcome.
