
Women in AI: Tara Chklovski is teaching the next generation of AI innovators | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Tara Chklovski is the CEO and founder of Technovation, a nonprofit that teaches young girls about technology and entrepreneurship. She has led the organization for the past 17 years, finding ways to help young women use technology to solve some of the world’s most pressing issues. She attended St. Stephen’s College in Delhi before receiving a master’s degree at Boston University and a PhD in aerospace engineering at the University of Southern California.

Briefly, how did you get your start in AI? What attracted you to the field?

I started learning about AI in 2016, when we were invited to the AAAI (Association for the Advancement of Artificial Intelligence) Conference in San Francisco and had a chance to interview a range of researchers using AI to tackle interesting problems, from space to stocks. Technovation is a nonprofit organization, and our mission is to bring the most powerful, cutting-edge tools and technologies to the most underserved communities. AI felt powerful and right, so I decided to learn a lot about it!

We conducted a national survey of parents in 2017, asking them about their thoughts and concerns around AI, and we were blown away by how interested African American mothers were in bringing AI literacy to their children, more so than any other demographic. We then launched the first global AI education program, the AI Family Challenge, supported by Google and Nvidia.

We have continued to learn and iterate since then, and we now run the only global, project-based AI education program with a research-based curriculum, which has been translated into 12 languages.

What work are you most proud of in the AI field?

The fact that we are the only organization with a peer-reviewed research article on the impact of our project-based AI curriculum, and that we have been able to bring it to tens of thousands of girls around the world.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

It is hard. We have many allies, but typically, power and influence lie with the CEOs, and they are usually male and do not fully empathize with the barriers that women face at every step. You become the CEO of a trillion-dollar company based on certain characteristics, and those characteristics may not be the same ones that enable you to empathize with others.

As far as solutions, society is becoming more educated, and both genders are becoming more sophisticated in empathy, mental health, psychological development, etc. My advice to those who support women in tech would be to be more bold in their investments so we can make more progress. We have enough research and data to know what works. We need more champions and advocates.

What advice would you give to women seeking to enter the AI field?

Start today. It is so easy to start messing around online with free, world-class lectures and courses. Find a problem that is interesting to you, and start learning and building. The Technovation curriculum is one great starting point as well: it requires no prior technical background, and by the end you will have created an AI-based startup.

What are some of the most pressing issues facing AI as it evolves?

[Society views] underserved groups as a monolithic group with no voice, agency, or talent — just waiting to be exploited. In fact, we have found that teenage girls are some of the earliest adopters of technology and have the coolest ideas. A Technovation team of girls created a ride-sharing and taxi-hailing app in December 2010. Another Technovation team created a mindfulness and focus app in March 2012. Today, Technovation teams are creating AI-based apps, building new datasets focused on groups in India, Africa, and Latin America — groups that are not being included in the apps coming out of Silicon Valley.

Instead of viewing these countries as just markets, consumers, and recipients, we need to view these groups as powerful collaborators who can help ensure that we are building truly innovative solutions to the complex problems facing humanity.

What are some issues AI users should be aware of?

These technologies are fast-moving. Be curious and peek under the hood as much as possible by learning how these models work. This will help you become a more informed user.

What is the best way to responsibly build AI?

By training groups that are not normally part of the design and engineering teams, and then building better technologies with them as co-designers and builders. It doesn’t take that much more time, and the end product will be much more robust and innovative for the process.

How can investors better push for responsible AI?

Push for collaborations with global nonprofits that have access to diverse talent pools so that your engineers are talking to a broad set of users and incorporating their perspectives.



Women in AI: Anna Korhonen studies the intersection between linguistics and AI | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She’s also a senior research fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems.

Korhonen previously served as a fellow at the Alan Turing Institute and she has a PhD in computer science and master’s degrees in both computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and “human-centric” NLP that — in her own words — “draws on the understanding of human cognitive, social and creative intelligence.”

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I was always fascinated by the beauty and complexity of human intelligence, particularly in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it’s a field that allows me to combine all these interests.

What work are you most proud of in the AI field?

While the science of building intelligent machines is fascinating, and one can easily get lost in the world of language modeling, the ultimate reason we’re building AI is its practical potential. I’m most proud of the work where my fundamental research on natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or apps that can support education.

Much of my current research is driven by the mission to develop AI that can improve human lives for the better. AI has a huge positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I’m fortunate to be working in an area of AI where we do have a sizable female population and established support networks. I’ve found these immensely helpful in navigating career and personal challenges.

For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at any cost is a great example. This has a huge impact on the priorities of both academia and industry, and wide-ranging socioeconomic and environmental implications. Do we need larger models, and what are their global costs and benefits? I feel we would’ve asked these questions a lot earlier in the game if we had better gender balance in the field.

What advice would you give to women seeking to enter the AI field?

AI desperately needs more women at all levels, but especially at the level of leadership. The current leadership culture isn’t necessarily attractive for women, but active involvement can change that culture — and ultimately the culture of AI. Women are infamously not always great at supporting each other. I would really like to see an attitude change in this respect: We need to actively network and help each other if we want to achieve better gender balance in this field.

What are some of the most pressing issues facing AI as it evolves?

AI has developed incredibly fast: It has evolved from an academic field to a global phenomenon in less than a single decade. During this time, most effort has gone toward scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to worry about the safety and trustworthiness of AI and its impact on jobs, democracy, the environment and other areas. We need to urgently put human needs and safety at the center of AI development.

What are some issues AI users should be aware of?

Current AI, even when seeming highly fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms we operate with. Even the best of today’s technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We humans should remain in charge.

What is the best way to responsibly build AI?

Developers of AI tend to think about ethics as an afterthought — after the technology has already been built. The best way to think about it is before any development begins. Questions such as, “Do I have a diverse enough team to develop a fair system?” or “Is my data really free to use and representative of all user populations?” or “Are my techniques robust?” should really be asked at the outset.

Although we can address some of this problem via education, we can only enforce it via regulation. The recent development of national and global AI regulations is important and needs to continue to guarantee that future technologies will be safer and more trustworthy.

How can investors better push for responsible AI?

AI regulations are emerging and companies will ultimately need to comply. We can think of responsible AI as sustainable AI truly worth investing in.



Women in AI: Ewa Luger explores how AI affects culture — and vice versa | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Ewa Luger is co-director at the Institute of Design Informatics, and co-director of the Bridging Responsible AI Divides (BRAID) program, backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry, and is a member of the U.K. Department for Culture, Media and Sport (DCMS) college of experts, a cohort of experts who provide scientific and technical advice to the DCMS.

Luger’s research explores social, ethical and interactional issues in the context of data-driven systems, including AI systems, with a particular interest in design, the distribution of power, spheres of exclusion, and user consent. Previously, she was a fellow at the Alan Turing Institute, served as a researcher at Microsoft, and was a fellow at Corpus Christi College at the University of Cambridge.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

After my PhD, I moved to Microsoft Research, where I worked in the user experience and design group in the Cambridge (U.K.) lab. AI was a core focus there, so my work naturally developed more fully into that area and expanded out into issues surrounding human-centered AI (e.g., intelligent voice assistants).

When I moved to the University of Edinburgh, it was due to a desire to explore issues of algorithmic intelligibility, which, back in 2016, was a niche area. I’ve found myself in the field of responsible AI and currently jointly lead a national program on the subject, funded by the AHRC.

What work are you most proud of in the AI field?

My most-cited work is a paper about the user experience of voice assistants (2016). It was the first study of its kind and is still highly cited. But the work I’m personally most proud of is ongoing. BRAID is a program I jointly lead, and is designed in partnership with a philosopher and ethicist. It’s a genuinely multidisciplinary effort designed to support the development of a responsible AI ecosystem in the U.K.

In partnership with the Ada Lovelace Institute and the BBC, it aims to connect arts and humanities knowledge to policy, regulation, industry and the voluntary sector. We often overlook the arts and humanities when it comes to AI, which has always seemed bizarre to me. When COVID-19 hit, the value of the creative industries was so profound; we know that learning from history is critical to avoid making the same mistakes, and philosophy is the root of the ethical frameworks that have kept us safe and informed within medical science for many years. Systems like Midjourney rely on artist and designer content as training data, and yet somehow these disciplines and practitioners have little to no voice in the field. We want to change that.

More practically, I’ve worked with industry partners like Microsoft and the BBC to co-produce responsible AI challenges, and we’ve worked together to find academics that can respond to those challenges. BRAID has funded 27 projects so far, some of which have been individual fellowships, and we have a new call going live soon.

We’re designing a free online course for stakeholders looking to engage with AI, setting up a forum where we hope to engage a cross-section of the population as well as other sectoral stakeholders to support governance of the work — and helping to explode some of the myths and hyperbole that surrounds AI at the moment.

I know that kind of narrative is what floats the current investment around AI, but it also serves to cultivate fear and confusion among those people who are most likely to suffer downstream harms. BRAID runs until the end of 2028, and in the next phase, we’ll be tackling AI literacy, spaces of resistance, and mechanisms for contestation and recourse. It’s a (relatively) large program at £15.9 million over six years, funded by the AHRC.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

That’s an interesting question. I’d start by saying that these issues aren’t solely issues found in industry, which is often perceived to be the case. The academic environment has very similar challenges with respect to gender equality. I’m currently co-director of an institute — Design Informatics — that brings together the school of design and the school of informatics, and so I’d say there’s a better balance both with respect to gender and with respect to the kinds of cultural issues that limit women reaching their full professional potential in the workplace.

But during my PhD, I was based in a male-dominated lab and, to a lesser extent, when I worked in industry. Setting aside the obvious effects of career breaks and caring, my experience has been of two interwoven dynamics. Firstly, there are much higher standards and expectations placed on women — for example, to be amenable, positive, kind, supportive, team-players and so on. Secondly, we’re often reticent when it comes to putting ourselves forward for opportunities that less-qualified men would quite aggressively go for. So I’ve had to push myself quite far out of my comfort zone on many occasions.

The other thing I’ve needed to do is to set very firm boundaries and learn when to say no. Women are often trained to be (and seen as) people pleasers. We can be too easily seen as the go-to person for the kinds of tasks that would be less attractive to your male colleagues, even to the extent of being assumed to be the tea-maker or note-taker in any meeting, irrespective of professional status. And it’s only really by saying no, and making sure that you’re aware of your value, that you ever end up being seen in a different light. It’s overly generalizing to say that this is true of all women, but it has certainly been my experience. I should say that I had a female manager while I was in industry, and she was wonderful, so the majority of sexism I’ve experienced has been within academia.

Overall, the issues are structural and cultural, and so navigating them takes effort — firstly in making them visible and secondly in actively addressing them. There are no simple fixes, and any navigation places yet more emotional labor on women in tech.

What advice would you give to women seeking to enter the AI field?

My advice has always been to go for opportunities that allow you to level up, even if you don’t feel that you’re 100% the right fit. Let them decline you rather than foreclosing opportunities yourself. Research shows that men go for roles they think they could do, but women only go for roles they feel they can already do competently. Currently, there’s also a trend toward more gender awareness in the hiring process and among funders, although recent examples show how far we have to go.

Take the U.K. Research and Innovation AI hubs, a recent high-profile, multi-million-pound investment: all nine of the research hubs announced are led by men. We should really be doing better to ensure gender representation.

What are some of the most pressing issues facing AI as it evolves?

Given my background, it’s perhaps unsurprising that I’d say that the most pressing issues facing AI are those related to the immediate and downstream harms that might occur if we’re not careful in the design, governance and use of AI systems.

The most pressing issue, and one that has been heavily under-researched, is the environmental impact of large-scale models. We might choose at some point to accept those impacts if the benefits of the application outweigh the risks. But right now, we’re seeing widespread use of systems like Midjourney run simply for fun, with users largely, if not completely, unaware of the impact each time they run a query.

Another pressing issue is how we reconcile the speed of AI innovations and the ability of the regulatory climate to keep up. It’s not a new issue, but regulation is the best instrument we have to ensure that AI systems are developed and deployed responsibly.

It’s very easy to assume that what has been called the democratization of AI — by this, I mean systems such as ChatGPT being so readily available to anyone — is a positive development. However, we’re already seeing the effects of generated content on the creative industries and creative practitioners, particularly regarding copyright and attribution. Journalism and news producers are also racing to ensure their content and brands are not affected. This latter point has huge implications for our democratic systems, particularly as we enter key election cycles. The effects could be quite literally world-changing from a geopolitical perspective. It also wouldn’t be a list of issues without at least a nod to bias.

What are some issues AI users should be aware of?

Not sure if this relates to companies using AI or regular citizens, but I’m assuming the latter. I think the main issue here is trust. I’m thinking, here, of the many students now using large language models to generate academic work. Setting aside the moral issues, the models are still not good enough for that. Citations are often incorrect or out of context, and the nuance of some academic papers is lost.

But this speaks to a wider point: You can’t yet fully trust generated text, and so you should only use those systems when the context or outcome is low risk. The obvious second issue is veracity and authenticity. As models become increasingly sophisticated, it’s going to be ever harder to know for sure whether content is human- or machine-generated. We haven’t yet developed, as a society, the requisite literacies to make reasoned judgments about content in an AI-rich media landscape. The old rules of media literacy apply in the interim: Check the source.

Another issue is that AI is not human intelligence, and so the models aren’t perfect — they can be tricked or corrupted with relative ease if one has a mind to.

What is the best way to responsibly build AI?

The best instruments we have are algorithmic impact assessments and regulatory compliance, but ideally, we’d be looking for processes that actively seek to do good rather than just seeking to minimize risk.

Going back to basics, the obvious first step is to address the composition of designers — ensuring that AI, informatics and computer science as disciplines attract women, people of color and representation from other cultures. It’s obviously not a quick fix, but we’d clearly have addressed the issue of bias earlier if the field were more heterogeneous. That brings me to the issue of the data corpus, and ensuring that it’s fit for purpose and that efforts are made to appropriately de-bias it.

Then there comes the need to train systems architects to be aware of moral and socio-technical issues — placing the same weight on these as we do the primary disciplines. Then we need to give systems architects more time and agency to consider and fix any potential issues. Then we come to the matter of governance and co-design, where stakeholders should be involved in the governance and conceptual design of the system. And finally, we need to thoroughly stress-test systems before they get anywhere near human subjects.

Ideally, we should also be ensuring that there are mechanisms in place for opt-out, contestation and recourse — though much of this is covered by emerging regulations. It seems obvious, but I’d also add that you should be prepared to kill a project that’s set to fail on any measure of responsibility. There’s often something of the fallacy of sunk costs at play here, but if a project isn’t developing as you’d hope, then raising your risk tolerance rather than killing it can result in the untimely death of a product.

The European Union’s recently adopted AI Act covers much of this, of course.

How can investors better push for responsible AI?

Taking a step back here, it’s now generally understood and accepted that the whole model that underpins the internet is the monetization of user data. In the same way, much, if not all, of AI innovation is driven by capital gain. AI development in particular is a resource-hungry business, and the drive to be the first to market has often been described as an arms race. So, responsibility as a value is always in competition with those other values.

That’s not to say that companies don’t care, and there has also been much effort made by various AI ethicists to reframe responsibility as a way of actually distinguishing yourself in the field. But this feels like an unlikely scenario unless you’re a government or another public service. It’s clear that being the first to market is always going to be traded off against a full and comprehensive elimination of possible harms.

But coming back to the term responsibility. To my mind, being responsible is the least we can do. When we say to our kids that we’re trusting them to be responsible, what we mean is, don’t do anything illegal, embarrassing or insane. It’s literally the basement when it comes to behaving like a functioning human in the world. Conversely, when applied to companies, it becomes some kind of unreachable standard. You have to ask yourself, how is this even a discussion that we find ourselves having?

Also, the incentives to prioritize responsibility are pretty basic and relate to wanting to be a trusted entity while also not wanting your users to come to newsworthy harm. I say this because plenty of people at the poverty line, or those from marginalized groups, fall below the threshold of interest, as they don’t have the economic or social capital to contest any negative outcomes, or to raise them to public attention.

So, to loop back to the question, it depends on who the investors are. If it’s one of the big seven tech companies, then they’re covered by the above. They have to choose to prioritize different values at all times, and not only when it suits them. For the public or third sector, responsible AI is already aligned to their values, and so what they tend to need is sufficient experience and insight to help make the right and informed choices. Ultimately, to push for responsible AI requires an alignment of values and incentives.



Women in AI: Allison Cohen on building responsible AI projects | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Allison Cohen is the senior applied AI projects manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. She works with researchers, social scientists and external partners to deploy socially beneficial AI projects. Cohen’s portfolio of work includes a tool that detects misogyny, an app to identify online activity from suspected human trafficking victims, and an agricultural app to recommend sustainable farming practices in Rwanda.

Previously, Cohen was a co-lead on AI drug discovery at the Global Partnership on Artificial Intelligence, an organization to guide the responsible development and use of AI. She’s also served as an AI strategy consultant at Deloitte and a project consultant at the Center for International Digital Policy, an independent Canadian think tank.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

The realization that we could mathematically model everything from recognizing faces to negotiating trade deals changed the way I saw the world, which is what made AI so compelling to me. Ironically, now that I work in AI, I see that we can’t — and in many cases shouldn’t — be capturing these kinds of phenomena with algorithms.

I was exposed to the field while I was completing a master’s in global affairs at the University of Toronto. The program was designed to teach students to navigate the systems affecting the world order — everything from macroeconomics to international law to human psychology. As I learned more about AI, though, I recognized how vital it would become to world politics, and how important it was to educate myself on the topic.

What allowed me to break into the field was an essay-writing competition. For the competition, I wrote a paper describing how psychedelic drugs would help humans stay competitive in a labor market riddled with AI, which qualified me to attend the St. Gallen Symposium in 2018 (it was a creative writing piece). My invitation, and subsequent participation in that event, gave me the confidence to continue pursuing my interest in the field.

What work are you most proud of in the AI field?

One of the projects I managed involved building a dataset containing instances of subtle and overt expressions of bias against women.

For this project, staffing and managing a multidisciplinary team of natural language processing experts, linguists and gender studies specialists throughout the entire project life cycle was crucial. It’s something that I’m quite proud of. I learned firsthand why this process is fundamental to building responsible applications, and also why it’s not done enough — it’s hard work! If you can support each of these stakeholders in communicating effectively across disciplines, you can facilitate work that blends decades-long traditions from the social sciences and cutting-edge developments in computer science.

I’m also proud that this project was well received by the community. One of our papers got a spotlight recognition in the socially responsible language modeling workshop at one of the leading AI conferences, NeurIPS. Also, this work inspired a similar interdisciplinary process that was managed by AI Sweden, which adapted the work to fit Swedish notions and expressions of misogyny.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

It’s unfortunate that in such a cutting-edge industry, we’re still seeing problematic gender dynamics. It’s not just adversely affecting women — all of us are losing. I’ve been quite inspired by a concept called “feminist standpoint theory” that I learned about in Sasha Costanza-Chock’s book, “Design Justice.”

The theory claims that marginalized communities, whose knowledge and experiences don’t benefit from the same privileges as others, have an awareness of the world that can bring about fair and inclusive change. Of course, not all marginalized communities are the same, and neither are the experiences of individuals within those communities.

That said, a variety of perspectives from those groups are critical in helping us navigate, challenge and dismantle all kinds of structural challenges and inequities. That’s why a failure to include women can keep the field of AI exclusionary for an even wider swath of the population, reinforcing power dynamics outside of the field as well.

In terms of how I’ve handled a male-dominated industry, I’ve found allies to be quite important. These allies are a product of strong and trusting relationships. For example, I’ve been very fortunate to have friends like Peter Kurzwelly, who’s shared his expertise in podcasting to support me in the creation of a female-led and -centered podcast called “The World We’re Building.” This podcast allows us to elevate the work of even more women and non-binary people in the field of AI.

What advice would you give to women seeking to enter the AI field?

Find an open door. It doesn’t have to be paid, it doesn’t have to be a career and it doesn’t even have to be aligned with your background or experience. If you can find an opening, you can use it to hone your voice in the space and build from there. If you’re volunteering, give it your all — it’ll allow you to stand out and hopefully get paid for your work as soon as possible.

Of course, there’s privilege in being able to volunteer, which I also want to acknowledge.

When I lost my job during the pandemic and unemployment was at an all-time high in Canada, very few companies were looking to hire AI talent, and those that were hiring weren’t looking for global affairs students with eight months’ experience in consulting. While applying for jobs, I began volunteering with an AI ethics organization.

One of the projects I worked on while volunteering was about whether there should be copyright protection for art produced by AI. I reached out to a lawyer at a Canadian AI law firm to better understand the space. She connected me with someone at CIFAR, who connected me with Benjamin Prud’homme, the executive director of Mila’s AI for Humanity Team. It’s amazing to think that through a series of exchanges about AI art, I learned about a career opportunity that has since transformed my life.

What are some of the most pressing issues facing AI as it evolves?

I have three answers to this question that are somewhat interconnected. I think we need to figure out:

  1. How to reconcile the fact that AI is built to be scaled while ensuring that the tools we’re building are adapted to fit local knowledge, experience and needs.
  2. If we’re to build tools that are adapted to the local context, we’re going to need to incorporate anthropologists and sociologists into the AI design process. But there are a plethora of incentive structures and other obstacles preventing meaningful interdisciplinary collaboration. How can we overcome this?
  3. How can we affect the design process even more profoundly than simply incorporating multidisciplinary expertise? Specifically, how can we alter the incentives such that we’re designing tools built for those who need it most urgently rather than those whose data or business is most profitable?

What are some issues AI users should be aware of?

Labor exploitation is one of the issues that I don’t think gets enough coverage. There are many AI models that learn from labeled data using supervised learning methods. When the model relies on labeled data, there are people who have to do this tagging (i.e., someone adds the label “cat” to an image of a cat). These people (annotators) are often the subjects of exploitative practices. For models that don’t require the data to be labeled during the training process (as is the case with some generative AI and other foundation models), datasets can still be built exploitatively in that the developers often don’t obtain consent from, or provide compensation or credit to, the data creators.

I would recommend checking out the work of Krystal Kauffman, who I was so glad to see featured in this TechCrunch series. She’s making headway in advocating for annotators’ labor rights, including a living wage, the end to “mass rejection” practices, and engagement practices that align with fundamental human rights (in response to developments like intrusive surveillance).

What is the best way to responsibly build AI?

Folks often look to ethical AI principles in order to claim that their technology is responsible. Unfortunately, ethical reflection can only begin after a number of decisions have already been made, including but not limited to:

  1. What are you building?
  2. How are you building it?
  3. How will it be deployed?

If you wait until after these decisions have been made, you’ll have missed countless opportunities to build responsible technology.

In my experience, the best way to build responsible AI is to be cognizant of — from the earliest stages of your process — how your problem is defined and whose interests it satisfies; how the orientation supports or challenges pre-existing power dynamics; and which communities will be empowered or disempowered through the AI’s use.

If you want to create meaningful solutions, you must navigate these systems of power thoughtfully.

How can investors better push for responsible AI?

Ask about the team’s values. If the values are defined, at least in part, by the local community and there’s a degree of accountability to that community, it’s more likely that the team will incorporate responsible practices.



What we've learned from the women behind the AI revolution | TechCrunch


The AI boom, love it or find it to be a bit more hype than substance, is here to stay. That means lots of companies raising oodles of dollars, a healthy dose of regulatory concern, academic work, and corporate jockeying. For startups, it means a huge opportunity to bring new technology to bear on a host of industries that could use a bit of polish.

But if you read the news, you might notice that men are far and away the most cited, and discussed, players in AI today. So, TechCrunch’s Dominic-Madori Davis and Kyle Wiggers decided to talk to women working in AI to learn more about their work, how they got into the world of artificial intelligence, and more. The series has been running for some time now, so it was the perfect moment to get the pair onto the Equity podcast for a chat about the project.

Thus far they have interviewed folks like Irene Solaiman, head of global policy at Hugging Face; Sarah Kreps, professor of government at Cornell; and Heidy Khlaaf, safety engineering director at Trail of Bits.

Don’t forget that the Equity crew runs interviews often in addition to our regular programming, which comes out Monday (a weekly kick-off show), Wednesday (our startups-focused news rundown), and Friday (our roundtable discussion of the biggest news from the week). See you bright and early Monday morning for more!

Equity is TechCrunch’s flagship podcast and posts every Monday, Wednesday and Friday. You can subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

You also can follow Equity on X and Threads, at @EquityPod.

For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes over at Simplecast.





Women in AI: Kristine Gloria tells women to enter the field and 'follow your curiosity' | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kristine Gloria formerly led the Aspen Institute’s Emergent and Intelligent Technologies Initiative — the Aspen Institute being the D.C.-headquartered think tank focused on values-based leadership and policy expertise. Gloria holds a PhD in cognitive science and a master’s in media studies, and her past work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab and the Center for Society, Technology and Policy at UC Berkeley.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

To be frank, I definitely didn’t start my career in pursuit of being in AI. First, I was really interested in understanding the intersection of technology and public policy. At the time, I was working on my master’s in media studies, exploring ideas around remix culture and intellectual property. I was living and working in D.C. as an Archer Fellow for the New America Foundation. One day, I distinctly remember sitting in a room filled with public policymakers and politicians who were throwing around terms that didn’t quite fit their actual technical definitions. It was shortly after this meeting that I realized that in order to move the needle on public policy, I needed the credentials.

I went back to school, earning my doctorate in cognitive science with a concentration on semantic technologies and online consumer privacy. I was very fortunate to have found a mentor, advisor and lab that encouraged a cross-disciplinary understanding of how technology is designed and built. So, I sharpened my technical skills alongside developing a more critical viewpoint on the many ways tech intersects our lives.

In my role as the director of AI at the Aspen Institute, I then had the privilege to ideate, engage and collaborate with some of the leading thinkers in AI. And I always found myself gravitating toward those who took the time to deeply question if and how AI would impact our day-to-day lives.

Over the years, I’ve led various AI initiatives and one of the most meaningful is just getting started. Now, as a founding team member and director of strategic partnerships and innovation at a new nonprofit, Young Futures, I’m excited to weave in this type of thinking to achieve our mission of making the digital world an easier place to grow up. Specifically, as generative AI becomes table stakes and as new technologies come online, it’s both urgent and critical that we help preteens, teens and their support units navigate this vast digital wilderness together.

What work are you most proud of (in the AI field)?

I’m most proud of two initiatives. First is my work related to surfacing the tensions, pitfalls and effects of AI on marginalized communities. Published in 2021, “Power and Progress in Algorithmic Bias” articulates months of stakeholder engagement and research around this issue. In the report, we posit one of my all-time favorite questions: “How can we (data and algorithmic operators) recast our own models to forecast for a different future, one that centers around the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it’s a constant consideration throughout my work.

The second most important initiative recently came from my time as head of data at Blue Fever, a company on a mission to improve youth well-being in a judgment-free and inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in this process. Most saliently, I gained a profound new appreciation for the impact a virtual companion can have on someone who’s struggling or who may not have the support systems in place. Blue was designed and built to bring its “big-sibling energy” to help guide users to reflect on their mental and emotional needs.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Unfortunately, the challenges are real and still very current. I’ve experienced my fair share of disbelief in my skills and experience among all types of colleagues in the space. But, for every single one of those negative challenges, I can point to an example of a male colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help manage. I also think that so much has changed in this space even in the last five years. The necessary skill sets and professional experiences that qualify as part of “AI” are not strictly computer science-focused anymore.

What advice would you give to women seeking to enter the AI field?

Enter in and follow your curiosity. This space is in constant motion, and the most interesting (and likely most productive) pursuit is to continuously be critically optimistic about the field itself.

What are some of the most pressing issues facing AI as it evolves?

I actually think some of the most pressing issues facing AI are the same issues we’ve not quite gotten right since the web was first introduced. These are issues around agency, autonomy, privacy, fairness, equity and so on. These are core to how we situate ourselves amongst the machines. Yes, AI can make it vastly more complicated — but so can socio-political shifts.

What are some issues AI users should be aware of?

AI users should be aware of how these systems complicate or enhance their own agency and autonomy. In addition, as the discourse around how technology, and particularly AI, may impact our well-being continues to grow, it’s important to remember that there are tried-and-true tools to manage more negative outcomes.

What is the best way to responsibly build AI?

A responsible build of AI is more than just the code. A truly responsible build takes into account the design, governance, policies and business model. Each drives the others, and we will continue to fall short if we only strive to address one part of the build.

How can investors better push for responsible AI?

One specific task, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, this practice of creating model cards enables teams — like funders — to evaluate the risks and safety issues of AI models used in a system.

Also, related to the above, investors should holistically evaluate the system in its capacity and ability to be built responsibly. For example, if you have trust and safety features in the build or a model card published, but your revenue model exploits vulnerable population data, then there’s misalignment with your intent as an investor. I do think you can build responsibly and still be profitable.

Lastly, I would love to see more collaborative funding opportunities among investors. In the realm of well-being and mental health, the solutions will be varied and vast, as no person is the same and no one solution can solve for all. Collective action among investors who are interested in solving the problem would be a welcome addition.
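To make the idea concrete, here is a minimal sketch of what a model card could contain, written as a plain Python dictionary. The section names loosely follow the model cards framework referenced above, and every value shown (model name, metrics, datasets) is a hypothetical placeholder rather than a description of any real system.

```python
# A minimal, hypothetical model card sketch: a plain dictionary whose keys
# loosely follow the sections proposed in the model cards framework.
# All values below are illustrative placeholders.
import json

model_card = {
    "model_details": {
        "name": "example-wellbeing-classifier",  # hypothetical model
        "version": "0.1.0",
        "developers": "Example nonprofit team",
        "license": "apache-2.0",
    },
    "intended_use": {
        "primary_uses": ["flagging posts that may indicate distress"],
        "out_of_scope_uses": ["clinical diagnosis", "unreviewed automated moderation"],
    },
    "factors": ["age group", "language", "dialect"],
    "metrics": {"f1": 0.82, "false_positive_rate": 0.07},  # placeholder numbers
    "evaluation_data": "held-out sample, stratified by language and age group",
    "training_data": "consented, anonymized posts; see accompanying data statement",
    "ethical_considerations": [
        "risk of over-flagging minority dialects",
        "annotator working conditions and pay",
    ],
    "caveats_and_recommendations": [
        "re-evaluate before deploying to a new region or language",
    ],
}

if __name__ == "__main__":
    # Print the card so reviewers (or funders doing diligence) can read it.
    print(json.dumps(model_card, indent=2))
```

An investor doing diligence could ask for an artifact like this alongside the pitch materials and read the ethical considerations and caveats directly.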



Women in AI: Brandie Nonnecke of UC Berkeley says investors should insist on responsible AI practices | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research to address questions around the role of regulation in promoting innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, that analyzes emerging tech policies, regulations and laws, providing insights into the benefits and risks and identifying strategies to harness tech for good.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I’ve been working in responsible AI governance for nearly a decade. My training in technology, public policy and their intersection with societal impacts drew me into the field. AI is already pervasive and profoundly impactful in our lives — for better and for worse. It’s important to me to meaningfully contribute to society’s ability to harness this technology for good rather than stand on the sidelines.

What work are you most proud of (in the AI field)?

I’m really proud of two things we’ve accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure responsible procurement and use of AI. We take our commitment to serve the public in a responsible manner seriously. I had the honor of co-chairing the UC Presidential Working Group on AI and its subsequent permanent AI Council. In these roles, I’ve been able to gain firsthand experience thinking through how to best operationalize our responsible AI principles in order to safeguard our faculty, staff, students and the broader communities we serve.

Second, I think it’s critical that the public understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Be curious, persistent and undeterred by imposter syndrome. I’ve found it crucial to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in tech has been a powerful way to share experiences, advice and encouragement.

What advice would you give to women seeking to enter the AI field?

For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Embrace networking, as connections will open doors to opportunities and offer invaluable support. And advocate for yourself and others, as your voice is essential in shaping an inclusive, equitable future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.

What are some of the most pressing issues facing AI as it evolves?

I believe one of the most pressing issues facing AI as it evolves is to not get hung up on the latest hype cycles. We’re seeing this now with generative AI. Sure, generative AI presents significant advancements and will have tremendous impact — good and bad. But other forms of machine learning are in use today that are surreptitiously making decisions that directly affect everyone’s ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it’s more important that we focus on how and where machine learning is being applied regardless of its technological prowess.

What are some issues AI users should be aware of?

AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more accountable and equitable AI systems.

What is the best way to responsibly build AI?

Responsibly building AI involves integrating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias management strategies and ongoing impact assessments. Prioritizing the public good and ensuring AI technologies are developed with human rights, fairness and inclusivity at their core are fundamental.

How can investors better push for responsible AI?

This is such an important question! For a long time we never expressly discussed the role of investors. I cannot express enough how impactful investors are! I believe the trope that “regulation stifles innovation” is overused and is often untrue. Instead, I firmly believe smaller firms can experience a late-mover advantage and learn from the larger AI companies that have been developing responsible AI practices and the guidance emerging from academia, civil society and government. Investors have the power to shape the industry’s direction by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives that focus on addressing social challenges through AI, promoting diversity and inclusion within the AI workforce and advocating for strong governance and technical strategies that help to ensure AI technologies benefit society as a whole.



Women in AI: Emilia Gómez at the EU started her AI career with music | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Emilia Gómez is a principal investigator at the European Commission’s Joint Research Centre and scientific coordinator of AI Watch, the EC initiative to monitor the advancements, uptake and impact of AI in Europe. Her team contributes scientific and technical knowledge to EC AI policies, including the recently proposed AI Act.

Gómez’s research is grounded in the computational music field, where she contributes to the understanding of the way humans describe music and the methods in which it’s modeled digitally. Starting from the music domain, Gómez investigates the impact of AI on human behavior — in particular the effects on jobs, decisions and child cognitive and socioemotional development.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I started my research in AI, in particular in machine learning, as a developer of algorithms for the automatic description of music audio signals in terms of melody, tonality, similarity, style or emotion, which are exploited in different applications from music platforms to education. I started to research how to design novel machine learning approaches for different computational tasks in the music field, and to study the relevance of the data pipeline, including data set creation and annotation. What I liked about machine learning at the time was its modeling capabilities and the shift from knowledge-driven to data-driven algorithm design — e.g., instead of designing descriptors based on our knowledge of acoustics and music, we were now using our know-how to design data sets, architectures and training and evaluation procedures.

From my experience as a machine learning researcher, and seeing my algorithms “in action” in different domains, from music platforms to symphonic music concerts, I realized the huge impact that those algorithms have on people (e.g. listeners, musicians) and directed my research toward AI evaluation rather than development, in particular on studying the impact of AI on human behavior and how to evaluate systems in terms of aspects such as fairness, human oversight or transparency. This is my team’s current research topic at the Joint Research Centre.

What work are you most proud of (in the AI field)?

On the academic and technical side, I’m proud of my contributions to music-specific machine learning architectures at the Music Technology Group in Barcelona, which have advanced the state of the art in the field, as reflected in my citation record. For instance, during my PhD I proposed a data-driven algorithm to extract tonality from audio signals (e.g., whether a musical piece is in C major or D minor), which has become a key reference in the field. Later, I co-designed machine learning methods for the automatic description of music signals in terms of melody (e.g., used to search for songs by humming) and tempo, and for the modeling of emotions in music. Most of these algorithms are currently integrated into Essentia, an open source library for audio and music analysis, description and synthesis, and have been exploited in many recommender systems.
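As a small illustration of the kind of tonality description mentioned above, here is a minimal sketch using Essentia’s Python bindings. It assumes Essentia is installed and that a local file named song.wav exists, and it uses the library’s generic KeyExtractor algorithm, which is not necessarily the specific method described in the answer; parameter details may vary between library versions.

```python
# Minimal sketch: estimate the key (tonality) of an audio file with Essentia.
# Assumes Essentia's Python bindings are installed and "song.wav" exists locally.
import essentia.standard as es

# Load the file as a mono signal at 44.1 kHz.
audio = es.MonoLoader(filename="song.wav", sampleRate=44100)()

# KeyExtractor returns the estimated key, its scale (major/minor) and a
# strength value indicating how confident the estimate is.
key, scale, strength = es.KeyExtractor()(audio)

print(f"Estimated key: {key} {scale} (strength {strength:.2f})")
```

Descriptors like this, together with tempo and melody features, are the kind of signals a music platform can feed into the recommender systems mentioned above.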

I’m particularly proud of Banda Sonora Vital (LifeSoundTrack), a project that received the Red Cross Award for Humanitarian Technologies, in which we developed a personalized music recommender adapted to senior Alzheimer’s patients. There’s also PHENICX, a large European Union (EU)-funded project I coordinated on the use of music and AI to create enriched symphonic music experiences.

I love the music computing community and I was happy to become the first female president of the International Society for Music Information Retrieval, to which I’ve been contributing all my career, with a special interest in increasing diversity in the field.

Currently, in my role at the Commission, which I joined in 2018 as lead scientist, I provide scientific and technical support to AI policies developed in the EU, notably the AI Act. From this recent work, which is less visible in terms of publications, I’m proud of my humble technical contributions to the AI Act — I say “humble” as you may guess there are many people involved here! As an example, there’s a lot of work I contributed to on the harmonization or translation between legal and technical terms (e.g. proposing definitions grounded in existing literature) and on assessing the practical implementation of legal requirements, such as transparency or technical documentation for high-risk AI systems, general-purpose AI models and generative AI.

I’m also quite proud of my team’s work in supporting the EU AI liability directive, where we studied, among others, particular characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability or their self- and continuous-learning capabilities, and assessed associated difficulties presented when it comes to proving causation.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

It’s not only tech — I’m also navigating a male-dominated AI research and policy field! I don’t have a technique or a strategy, as it’s the only environment I know. I don’t know what it would be like to work in a diverse or female-dominated working environment. “Wouldn’t it be nice?” as the Beach Boys song goes. I honestly try to avoid frustration and have fun in this challenging scenario, working in a world dominated by very assertive guys and enjoying collaborating with excellent women in the field.

What advice would you give to women seeking to enter the AI field?

I would tell them two things:

You’re much needed — please enter our field, as there’s an urgent need for diversity of visions, approaches and ideas. For instance, according to the divinAI project — a project I co-founded on monitoring diversity in the AI field — only 23% of author names at the International Conference on Machine Learning and 29% at the International Joint Conference on AI in 2023 were female (an estimate based on names rather than gender identity).

You aren’t alone — there are many women, nonbinary colleagues and male allies in the field, even though we may not be so visible or recognized. Look for them and get their mentoring and support! In this context, there are many affinity groups present in the research field. For instance, when I became president of the International Society for Music Information Retrieval, I was very active in the Women in Music Information Retrieval initiative, a pioneer in diversity efforts in music computing with a very successful mentoring program.

What are some of the most pressing issues facing AI as it evolves?

In my opinion, researchers should devote as many efforts to AI development as to AI evaluation, as there’s now a lack of balance. The research community is so busy advancing the state of the art in terms of AI capabilities and performance and so excited to see their algorithms used in the real world that they forget to do proper evaluations, impact assessment and external audits. The more intelligent AI systems are, the more intelligent their evaluations should be. The AI evaluation field is under-studied, and this is the cause of many incidents that give AI a bad reputation, e.g. gender or racial biases present in data sets or algorithms.
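
(As a concrete, deliberately simplified example of the kind of bias evaluation referred to above, one basic check compares a classifier's positive-prediction rates across demographic groups, the so-called demographic parity gap. The Python sketch below is a generic illustration rather than any specific evaluation methodology; the predictions and group labels are invented for the example.)

import numpy as np

def demographic_parity_difference(y_pred, groups):
    # Largest gap in positive-prediction rates between any two groups.
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical binary predictions (1 = positive outcome) and a sensitive attribute per person.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, groups))  # 0.5 here; 0.0 would mean equal rates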

What are some issues AI users should be aware of?

Citizens using AI-powered tools, like chatbots, should know that AI is not magic. Artificial intelligence is a product of human intelligence. They should learn about the working principles and limitations of AI algorithms to be able to challenge them and use them in a responsible way. It’s also important for citizens to be informed about the quality of AI products and how they are assessed or certified, so that they know which ones they can trust.

What is the best way to responsibly build AI?

In my view, the best way to develop AI products (with a good social and environmental impact and in a responsible way) is to spend the needed resources on evaluation, assessment of social impact and mitigation of risks — for instance, to fundamental rights — before placing an AI system on the market. This benefits not only businesses and trust in their products, but also society.

Responsible AI or trustworthy AI is a way to build algorithms where aspects such as transparency, fairness, human oversight or social and environmental well-being are addressed from the very beginning of the AI design process. In this sense, the AI Act not only sets the bar for regulating artificial intelligence worldwide, but it also reflects the European emphasis on trustworthiness and transparency — enabling innovation while protecting citizens’ rights. This, I feel, will increase citizens’ trust in the products and the technology.




Women in AI: Urvashi Aneja is researching the social impact of AI in India | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Urvashi Aneja is the founding director of Digital Futures Lab, an interdisciplinary research effort that seeks to examine the interaction between technology and society in the Global South. She’s also an associate fellow at the Asia Pacific program at Chatham House, an independent policy institute based in London.

Aneja’s current research focuses on the societal impact of algorithmic decision-making systems in India, where she’s based, and platform governance. Aneja recently authored a study on the current uses of AI in India, reviewing use cases across sectors including policing and agriculture.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I started my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. I quickly learned that there’s a fine line between innovation and experimentation, particularly when dealing with vulnerable populations. The learnings from this experience made me deeply concerned about the techno-solutionist narratives around the potential of digital technologies, particularly AI. At the same time, India had launched its Digital India mission and National Strategy for Artificial Intelligence. I was troubled by the dominant narratives that saw AI as a silver bullet for India’s complex socio-economic problems, and the complete lack of critical discourse around the issue.

What work are you most proud of (in the AI field)?

I’m proud that we’ve been able to draw attention to the political economy of AI production as well as broader implications for social justice, labor relations and environmental sustainability. Very often narratives on AI focus on the gains of specific applications, and at best, the benefits and risks of that application. But this misses the forest for the trees — a product-oriented lens obscures the broader structural impacts such as the contribution of AI to epistemic injustice, deskilling of labor and the perpetuation of unaccountable power in the majority world. I’m also proud that we’ve been able to translate these concerns into concrete policy and regulation — whether designing procurement guidelines for AI use in the public sector or delivering evidence in legal proceedings against Big Tech companies in the Global South.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

By letting my work do the talking. And by constantly asking: why?

What advice would you give to women seeking to enter the AI field?

Develop your knowledge and expertise. Make sure your technical understanding of issues is sound, but don’t focus narrowly on AI. Instead, study widely so that you can draw connections across fields and disciplines. Not enough people understand AI as a socio-technical system that’s a product of history and culture.

What are some of the most pressing issues facing AI as it evolves?

I think the most pressing issue is the concentration of power within a handful of technology companies. While not new, this problem is exacerbated by new developments in large language models and generative AI. Many of these companies are now fanning fears around the existential risks of AI. Not only is this a distraction from the existing harms, but it also positions these companies as necessary for addressing AI-related harms. In many ways, we’re losing some of the momentum of the “tech-lash” that arose following the Cambridge Analytica episode. In places like India, I also worry that AI is being positioned as necessary for socioeconomic development, presenting an opportunity to leapfrog persistent challenges. Not only does this exaggerate AI’s potential, but it also disregards the point that it isn’t possible to leapfrog the institutional development needed to develop safeguards. Another issue that we’re not considering seriously enough is the environmental impacts of AI — the current trajectory is likely to be unsustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to be the beneficiaries of AI innovation.

What are some issues AI users should be aware of?

Users need to be made aware that AI isn’t magic, nor anything close to human intelligence. It’s a form of computational statistics that has many beneficial uses, but is ultimately only a probabilistic guess based on historical or previous patterns. I’m sure there are several other issues users also need to be aware of, but I want to caution that we should be wary of attempts to shift responsibility downstream, onto users. I see this most recently with the use of generative AI tools in low-resource contexts in the majority world — rather than being cautious about these experimental and unreliable technologies, the focus often shifts to how end users, such as farmers or front-line health workers, need to upskill.

What is the best way to responsibly build AI?

This must start with assessing the need for AI in the first place. Is there a problem that AI can uniquely solve or are other means possible? And if we’re to build AI, is a complex, black-box model necessary, or might a simpler logic-based model do just as well? We also need to re-center domain knowledge into the building of AI. In the obsession with big data, we’ve sacrificed theory — we need to build a theory of change based on domain knowledge and this should be the basis of the models we’re building, not just big data alone. This is of course in addition to key issues such as participation, inclusive teams, labor rights and so on.

How can investors better push for responsible AI?

Investors need to consider the entire life cycle of AI production — not just the outputs or outcomes of AI applications. This would require looking at a range of issues such as whether labor is fairly valued, the environmental impacts, the business model of the company (i.e. is it based on commercial surveillance?) and internal accountability measures within the company. Investors also need to ask for better and more rigorous evidence about the supposed benefits of AI.




Women in AI: Kathi Vidal at the USPTO has been working on AI since the early 1990s | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kathi Vidal is an American intellectual property lawyer and former engineer who serves as director of the United States Patent and Trademark Office (USPTO).

Vidal began her career as an engineer for General Electric and Lockheed Martin, working in the areas of AI, software engineering and circuitry. She has a Bachelor’s degree in electrical engineering from Binghamton University, a Master’s degree in electrical engineering from Syracuse University and a JD from the University of Pennsylvania Law School.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

When I started college at 16, I was interested in scientific problem solving. I had an oscilloscope that I purchased at a garage sale that I was constantly tinkering with, and I loved working on my Dodge Dart! This early fascination led me to GE’s Edison Engineering Program, where I was one of two women selected. We engaged in weekly technical problem-solving across engineering and scientific disciplines on top of rotational work assignments in different technical fields. When I was approached to join a three-person team working in the field of artificial intelligence, I jumped at it. The ability to engage in new, groundbreaking work in the early 1990s that could be applied across scientific and engineering disciplines to come up with ways to more creatively innovate was thrilling. I saw it as a way of getting away from the rigidity of current design principles and more closely emulating the nuances humans bring to problem-solving.

What work are you most proud of (in the AI field)?

It would be a tie between my current work on U.S. government AI policies at the intersection of AI and innovation and my work developing the first AI fault diagnostic system for aircraft. As to the latter, I worked across neural networks, fuzzy logic and expert systems to build a resilient, self-learning system in the early 1990s. Though I left for law school before the system was deployed, I was excited to create something new in the relatively nascent AI space (compared to where AI is today) and to work with the PhDs at GE Research to share learnings across our projects. I was so excited about AI that I ended up writing my Master’s thesis on my work.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Candidly, in the 1990s, the way I navigated the challenges in the engineering field was by conforming (without realizing I was conforming). It was a different time, and it probably goes without saying that most leadership positions in engineering and in law firms were more male-dominated than they are today. It was suggested to me by some of my male colleagues that I needed to learn how to laugh less. But I found joy in life and what I was doing! I remember speaking in front of a room full of women at a women’s conference we created in the mid-2000s (before women’s conferences became the norm). When I finished speaking, a number of audience members came up to congratulate me on my speech and tell me that they had never seen me so lively and animated. And I was speaking about patent law. It was then that I had an “aha” moment — being appreciated for being authentic was how I felt included and successful at my job.

Since that time, I’ve been deliberate about being authentic and creating inclusive environments where women can thrive. For example, I’ve revamped hiring and promotion practices in organizations where I’ve served. Most recently at USPTO, our agency saw a nearly 5% increase in diversity among our leadership ranks within one year due to these changes. I’ve worked to champion policies that open the doors for more women to participate in innovation, recognizing that while more than 40% of those who use our free legal services to file patent applications identify as women, only 13% of patented inventors are women — so we’re working hard to close that gap. Along with U.S. Secretary of Commerce Gina Raimondo, I founded the Women’s Entrepreneurship initiative across the U.S. Commerce Department to empower more women business leaders and arm them with the information and assistance they need to be successful. I also proudly advance policies to uplift not only women but other communities that have been historically underrepresented in our innovation ecosystem through my work helping lead the Council for Inclusive Innovation and the Economic Development Administration’s National Advisory Council on Innovation and Entrepreneurship. And I spend time mentoring others in my free time, sharing lessons learned and developing the next generation of leaders and advocates. I obviously can’t do any of this work alone — it’s all through and with like-minded women and men.

What advice would you give to women seeking to enter the AI field?

First, we need you, so keep going. It’s important to have women involved in shaping AI models of the future in order to mitigate bias or safety risks. And there are so many trailblazers out there — Fei-Fei Li at Stanford and Elham Tabassi at the National Institute of Standards and Technology (NIST), to name a couple. I’m honored to work alongside incredible leaders at the forefront of AI — Secretary Raimondo and Zoë Baird at the Department of Commerce, NIST Director Laurie Locascio, Copyright Office Director Shira Perlmutter and the new lead of the AI Safety Institute Elizabeth Kelly. It’s imperative that we all work together, throughout government and the private sector, to create the future, or it will be created for us. And it may not be the future we believe in or will want.

Second, find your tailwind and persist. Make the ask and put your goals out there to attract others to support you on your journey. Don’t take “no” personally. See “no” and resistance as a headwind. Find your tailwind and those mentors and sponsors who are bought into you, your success and what you can contribute in this terribly important field.

What are some of the most pressing issues facing AI as it evolves?

The U.S. is fortunate to lead the world in innovation by AI developers, and we therefore also have the responsibility to lead on policies that make AI safe and trustworthy and further our values. We are pursuing this in collaboration with other countries in several multilateral venues and bilaterally. USPTO has a long history of this kind of collaboration and leadership. To ensure American values are embedded into AI policy, our AI and Emerging Technology Partnership, which we began in 2022, supports the Biden administration’s whole-of-government approach to AI, including the National AI Initiative, to advance U.S. leadership in AI. Most recently, we published guidance clarifying the level of human contribution needed for patenting AI-enabled inventions, promoting human ingenuity and incentivizing investment in AI-enabled innovations while not hindering future innovation by unnecessarily locking up innovation or stifling competition. To our knowledge, it’s the first such guidance in the world. We must achieve the same goals and balance when it comes to our creative sector, and we’re working with stakeholders and the Copyright Office to do so.

While we at USPTO are focused on harnessing AI to democratize and scale innovation, as well as policy at the intersection of AI and intellectual property, we’re also working with NIST and the National Telecommunications and Information Administration (NTIA) on other pressing issues, including the safe, secure and trustworthy development and use of AI and mechanisms that can create earned trust in AI.

What are some issues AI users should be aware of?

As President Biden stated in his executive order on AI, responsible AI use has the potential to help solve urgent challenges and make our world more prosperous, productive, innovative and secure, while irresponsible use could exacerbate societal harms “such as fraud, discrimination, bias and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” AI users need to be thoughtful and deliberate in their use of AI so they do not perpetuate those harms. One key way is to stay abreast of the work NIST is doing through its AI Risk Management Framework and its U.S. AI Safety Institute.

What is the best way to responsibly build AI?

Together. To responsibly build AI, we need not only government intervention and policies, but also industry leadership. President Biden recognized this when he convened private AI companies and secured their voluntary commitments to manage the risks posed by AI. We in U.S. government also need your feedback as we do our work. We’re regularly seeking your input through public engagements as well as requests for information or comments we issue in the Federal Register. For example, through our AI and Emerging Technology Partnership, we sought your comments before designing our Inventorship Guidance for AI-Assisted Inventions. We’re using your comments in response to the Copyright Office’s request for information related to the intersection of copyright and AI to advise the Biden administration on national and international strategies. NIST asked for your input and information to support safe, secure and trustworthy development and use of AI and NTIA asked for your feedback on AI accountability. And we at USPTO will soon issue another request for comment to explore ways in which our patent laws may need to evolve to account for the way AI may influence other patentability factors or may create a minefield of “prior art,” making it harder to patent. The best thing you can do is stay tuned to the administration’s work on AI, including NIST’s, USPTO’s, NTIA’s and the Department of Commerce at large, and to provide your feedback so we can build responsible AI together.

How can investors better push for responsible AI?

Investors should do what they do best — invest in the work. Progress in responsible AI can’t come out of thin air; we need companies in this space doing the hard work to bring about the responsible AI companies of tomorrow. We need investors to ask the right questions, to push for responsible development, and to use their money to support the responsible AI of the future. Further, they should impress upon the companies they invest in the need to prioritize IP protection and cybersecurity, and to avoid accepting investments from suspicious sources. All three are necessary to ensure control over the work and to ensure that work creates jobs and bolsters national security.


