
Women in AI: Anna Korhonen studies the intersection between linguistics and AI | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She’s also a senior research fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems.

Korhonen previously served as a fellow at the Alan Turing Institute and she has a PhD in computer science and master’s degrees in both computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and “human-centric” NLP that — in her own words — “draws on the understanding of human cognitive, social and creative intelligence.”

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I was always fascinated by the beauty and complexity of human intelligence, particularly in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it’s a field that allows me to combine all these interests.

What work are you most proud of in the AI field?

While the science of building intelligent machines is fascinating, and one can easily get lost in the world of language modeling, the ultimate reason we’re building AI is its practical potential. I’m most proud of the work where my fundamental research on natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or apps that can support education.

Much of my current research is driven by the mission to develop AI that can improve human lives. AI has huge positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I’m fortunate to be working in an area of AI where we do have a sizable female population and established support networks. I’ve found these immensely helpful in navigating career and personal challenges.

For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at any cost is a great example. This has a huge impact on the priorities of both academia and industry, and wide-ranging socioeconomic and environmental implications. Do we need larger models, and what are their global costs and benefits? I feel we would’ve asked these questions a lot earlier in the game if we had better gender balance in the field.

What advice would you give to women seeking to enter the AI field?

AI desperately needs more women at all levels, but especially at the level of leadership. The current leadership culture isn’t necessarily attractive for women, but active involvement can change that culture — and ultimately the culture of AI. Women are infamously not always great at supporting each other. I would really like to see an attitude change in this respect: We need to actively network and help each other if we want to achieve better gender balance in this field.

What are some of the most pressing issues facing AI as it evolves?

AI has developed incredibly fast: It has evolved from an academic field to a global phenomenon in less than a single decade. During this time, most effort has gone toward scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to worry about the safety and trustworthiness of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the center of AI development.

What are some issues AI users should be aware of?

Current AI, even when seeming highly fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms we operate with. Even the best of today’s technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We humans should remain in charge.

What is the best way to responsibly build AI?

Developers of AI tend to think about ethics as an afterthought — after the technology has already been built. The best way to think about it is before any development begins. Questions such as, “Do I have a diverse enough team to develop a fair system?” or “Is my data really free to use and representative of all the users’ populations?” or “Are my techniques robust?” should really be asked at the outset.

Although we can address some of this problem via education, we can only enforce it via regulation. The recent development of national and global AI regulations is important and needs to continue to guarantee that future technologies will be safer and more trustworthy.

How can investors better push for responsible AI?

AI regulations are emerging and companies will ultimately need to comply. We can think of responsible AI as sustainable AI truly worth investing in.



How United Airlines uses AI to make flying the friendly skies a bit easier | TechCrunch


When you board a United Airlines plane, the gate agents, flight attendants and others involved in making sure your plane leaves on time are in a chatroom coordinating a lot of the work that you, as a passenger, will hopefully never notice. Is there still space for carry-on bags? Did the caterer bring the missing orange juice? Is there a way to seat a family together?

When a flight is delayed, a message with an explanation will arrive by text and in the United app. Most of the time, that message is generated by AI. Meanwhile, in offices around the world, dispatchers are looking at this real-time data to ensure that the crew can still legally fly the plane without running afoul of FAA regulations. And only a few weeks ago, United turned on its AI customer service chatbot.

Jason Birnbaum, who became United’s CIO in 2022, manages a team of over 1,500 employees and about 2,000 contractors who are responsible for all of the tech that makes this happen.

“What I love about our business is also what you hate about the business,” he told me in a recent interview. “I was at GE for many years, in the appliance business; we could go down for a day and I don’t think anyone would notice. They’d be: ‘All right, the dishwashers aren’t rolling off the line.’ But it wasn’t newsworthy. Now if something happens, even for 15 minutes, not only is it all over social media but the news trucks head out to the airport.”

Before joining United, Birnbaum spent 16 years at GE, moving up the ladder from technology manager to becoming the CIO of GE Consumer and Industrial, based in Budapest. In 2009, he became the CIO of GE Healthcare Global Supply Chain. He joined United in 2015 as its SVP of Digital Technology, where he was responsible for launching projects like ConnectionSaver, one of United’s first AI/ML-based services, which proactively holds flights when fliers have tight connections (and which saved me from spending 12 hours at SFO last week).

I wanted to talk to Birnbaum about how he — and other CIOs at global enterprises — are thinking about the use of AI, one area of innovation the airline is focused on. But before we could talk about AI, we had to talk about the cloud, because United is still in the process of moving services there. If there’s one trend in cloud computing right now, it’s that everybody is trying to optimize their cloud infrastructure and spend less.

United Continental Airlines YR202 3490 (CAL) 737-800 BSI interior. Image Credits: United

“I’m starting to see these companies and startups that are, ‘How do you optimize your cloud, and how do you manage your cloud?’ There’s a lot of people focused on questions like, ‘You’ve got a lot of data, can I store it better for you?’ Or, ‘You’ve got a lot of new applications; can I help you monitor them better?’ Because all the tools you used to have don’t work anymore,” he said. Maybe the age of digital transformation is over, he said, and we’re now in the age of cloud optimization.

United itself has bet heavily on the cloud, with AWS as its preferred cloud provider. Unsurprisingly, United, too, is looking at how it can optimize its cloud usage, from both a cost and a reliability perspective. As with so many companies going through this process, that also means looking at developer productivity and adding automation and DevOps practices into the mix. “We’re there. We have an established presence [in the cloud], but now we’re kind of in the market to try to continue to optimize as well,” Birnbaum said.

But that also comes back to reliability. Like all airlines, United still operates a lot of legacy systems — and they still work. “Frankly, we are extra careful as we move through this journey, to make sure we don’t disrupt the operation or create self-inflicted wounds,” he said.

United has already moved and turned off a lot of legacy systems, and that process is ongoing. Later this year, for example, the company will turn off a large Unisys-based system. But Birnbaum also thinks that United will continue to have on-prem systems. “I just want to be in the best places for the applications and for the user experience,” he said, whether that’s for performance, privacy or security reasons.

The one thing the company is not trying to build, though, is some kind of overarching United Platform that runs all of its systems. There’s simply too much complexity in day-to-day airline operations for that, Birnbaum said. Some platforms manage reservations, ticketing and bag tracking, for example, while others handle crew assignments.

A worker in the United Airlines Station Operation Center at Newark Liberty International Airport in Newark, New Jersey. Image Credits: Angus Mordant/Bloomberg via Getty Images

When something goes wrong, those systems have to work together and in near real time. That’s also why United is betting on one cloud provider. “I don’t imagine we’ll have one platform,” Birnbaum said. “I think we’re going to get really good at connecting things and getting applications to talk to each other.”

In practice, that means that today it’s possible for the team to see when the caterer got off the plane and who has checked in for the flight, for example. And the ground teams and flight attendant crews can see all of that through their internal chat app, too.

Every flight has an AI story

While all of this work is still going on, United is also looking at how it can best leverage AI.

One story I regularly hear about AI/ML in large enterprises is that ChatGPT didn’t necessarily change how the technologists thought about it, but that it suddenly became a boardroom discussion. That also holds true for United.

“We had a pretty mature AI practice,” Birnbaum said when I asked him when he realized that generative AI was something the team had to pay attention to. “We built a lot of capabilities to manage models, to do tuning and all that. So the good news for us was that we had already made a pretty big investment in this capability. What changed [when ChatGPT arrived] was not that we had to take it seriously. It was who was asking about it: When the CEO and the board suddenly are saying: ‘Hey, I need to know more about this.’”

United is quite bullish on AI, Birnbaum said. “I think the travel industry has so many different examples of where AI can be used both for the customer and for the employees.” One of those is United’s “Every flight has a story.”

Not that long ago, it was rather typical to get a notification when a flight was delayed, but no further information about it. Maybe the incoming flight was delayed. Maybe there was a maintenance issue. A few years ago, United started using human agents to write short notices that explained the delay and sent those out through its app and as text messages. Now the vast majority of these messages are written by AI, which pulls in data from the airline’s internal chat app and other sources.
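
As a rough illustration of that pattern (structured operational facts in, a short passenger-facing note out), here is a minimal sketch. The field names and the `call_llm` helper are assumptions made for the example, not United’s actual system.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for whatever text-generation model actually writes the note.
    raise NotImplementedError

def draft_delay_message(flight: str, reason: str, new_departure: str) -> str:
    # Assemble the structured facts about a delay into a prompt; the model
    # turns them into a short, plain-language update for passengers.
    prompt = (
        "Write a brief, friendly delay notification for passengers.\n"
        f"Flight: {flight}\n"
        f"Reason for the delay: {reason}\n"
        f"New estimated departure: {new_departure}\n"
        "Keep it to two sentences and avoid airline jargon."
    )
    return call_llm(prompt)
```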

Similarly, United is also looking at using generative AI to summarize flight information for its operations teams, so they can get a quick overview of what’s happening.

A United Airlines flight information board. Image Credits: Jim Vondruska/Getty Images

Just a few weeks ago, United fully moved its chat system on United.com to an AI agent, too. In my own tests, that system still felt quite limited, but it’s only a start, Birnbaum said.

Famously, Air Canada once used an AI bot that sometimes gave wrong answers, but Birnbaum said he wasn’t too worried about that. From a technical perspective, the bot draws upon United’s knowledge base, which should keep hallucinations under control. “But to me [the Air Canada incident] wasn’t a technology failure, that was a customer service failure because — and I won’t comment too much — but I would say that, today, our human agents give wrong answers, too. We just have to deal with that and move on. I think we’re very prepared for that situation,” Birnbaum said.

Later this year, United also plans to launch a tool that is currently called “Get Me Close.” Often, when there’s a delay, customers are willing to change their plans to switch to a nearby airport. I once had United switch me to a flight to Amsterdam when my flight to Berlin got canceled (not that close, but close enough to get a train and still moderate a keynote session the next morning).

“While our mobile tools are great — and they are excellent — when people go talk to humans, the interactions are usually more about building optionality. Meaning you’re going to say, ‘Well, your flight’s delayed,’ and then someone might say, ‘Well, could you get me to Philadelphia instead of New York? Could you get me close?’ We believe that interaction is a great use case for AI.”

AI for pilots?

After creating the system that automatically writes the delay “stories” in the app, Birnbaum’s team is now thinking about where it can use the same generative AI technology. One area: those short briefings pilots usually give before takeoff.

“A pilot actually came up to me and said, ‘One of the things that some pilots are great at is getting on that speaker and saying, “Hey, welcome, everybody going to Las Vegas, blah blah.”’ And he said, ‘Some pilots are introverted; could you have an AI engine that helps me generate an announcement on the plane about where I’m going so that I could give a really good announcement about what’s happening?’ And I thought that was a great use case.”

As it turns out, one of the main drivers of customer satisfaction for airlines is actually pilot interaction. A few years ago, United started focusing on its Net Promoter Score and asked pilots to make announcements about delays while standing at the front of the cabin, for example. It makes sense for the airline to look at how it can improve upon such a crucial interaction — while hopefully still allowing pilots to go off-script, too.

Another area where generative AI may help pilots is in summarizing complex technical documents. But as Birnbaum rightly noted, everything that involves the pilot flying the plane is heavily structured and regulated, so it’ll be a while before the airline launches anything there.



Women in AI: Ewa Luger explores how AI affects culture — and vice versa | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Ewa Luger is co-director at the Institute of Design Informatics, and co-director of the Bridging Responsible AI Divides (BRAID) program, backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry, and is a member of the U.K. Department for Culture, Media and Sport (DCMS) college of experts, a cohort of experts who provide scientific and technical advice to the DCMS.

Luger’s research explores social, ethical and interactional issues in the context of data-driven systems, including AI systems, with a particular interest in design, the distribution of power, spheres of exclusion, and user consent. Previously, she was a fellow at the Alan Turing Institute, served as a researcher at Microsoft, and was a fellow at Corpus Christi College at the University of Cambridge.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

After my PhD, I moved to Microsoft Research, where I worked in the user experience and design group in the Cambridge (U.K.) lab. AI was a core focus there, so my work naturally developed more fully into that area and expanded out into issues surrounding human-centered AI (e.g., intelligent voice assistants).

When I moved to the University of Edinburgh, it was due to a desire to explore issues of algorithmic intelligibility, which, back in 2016, was a niche area. I’ve found myself in the field of responsible AI and currently jointly lead a national program on the subject, funded by the AHRC.

What work are you most proud of in the AI field?

My most-cited work is a paper about the user experience of voice assistants (2016). It was the first study of its kind and is still highly cited. But the work I’m personally most proud of is ongoing. BRAID is a program I jointly lead, and is designed in partnership with a philosopher and ethicist. It’s a genuinely multidisciplinary effort designed to support the development of a responsible AI ecosystem in the U.K.

In partnership with the Ada Lovelace Institute and the BBC, it aims to connect arts and humanities knowledge to policy, regulation, industry and the voluntary sector. We often overlook the arts and humanities when it comes to AI, which has always seemed bizarre to me. When COVID-19 hit, the value of the creative industries was so profound; we know that learning from history is critical to avoid making the same mistakes, and philosophy is the root of the ethical frameworks that have kept us safe and informed within medical science for many years. Systems like Midjourney rely on artist and designer content as training data, and yet somehow these disciplines and practitioners have little to no voice in the field. We want to change that.

More practically, I’ve worked with industry partners like Microsoft and the BBC to co-produce responsible AI challenges, and we’ve worked together to find academics that can respond to those challenges. BRAID has funded 27 projects so far, some of which have been individual fellowships, and we have a new call going live soon.

We’re designing a free online course for stakeholders looking to engage with AI, setting up a forum where we hope to engage a cross-section of the population as well as other sectoral stakeholders to support governance of the work — and helping to explode some of the myths and hyperbole that surround AI at the moment.

I know that kind of narrative is what floats the current investment around AI, but it also serves to cultivate fear and confusion among those people who are most likely to suffer downstream harms. BRAID runs until the end of 2028, and in the next phase, we’ll be tackling AI literacy, spaces of resistance, and mechanisms for contestation and recourse. It’s a (relatively) large program at £15.9 million over six years, funded by the AHRC.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

That’s an interesting question. I’d start by saying that these issues aren’t solely issues found in industry, which is often perceived to be the case. The academic environment has very similar challenges with respect to gender equality. I’m currently co-director of an institute — Design Informatics — that brings together the school of design and the school of informatics, and so I’d say there’s a better balance both with respect to gender and with respect to the kinds of cultural issues that limit women reaching their full professional potential in the workplace.

But during my PhD, I was based in a male-dominated lab and, to a lesser extent, when I worked in industry. Setting aside the obvious effects of career breaks and caring, my experience has been of two interwoven dynamics. Firstly, there are much higher standards and expectations placed on women — for example, to be amenable, positive, kind, supportive, team-players and so on. Secondly, we’re often reticent when it comes to putting ourselves forward for opportunities that less-qualified men would quite aggressively go for. So I’ve had to push myself quite far out of my comfort zone on many occasions.

The other thing I’ve needed to do is to set very firm boundaries and learn when to say no. Women are often trained to be (and seen as) people pleasers. We can be too easily seen as the go-to person for the kinds of tasks that would be less attractive to your male colleagues, even to the extent of being assumed to be the tea-maker or note-taker in any meeting, irrespective of professional status. And it’s only really by saying no, and making sure that you’re aware of your value, that you ever end up being seen in a different light. It’s overly generalizing to say that this is true of all women, but it has certainly been my experience. I should say that I had a female manager while I was in industry, and she was wonderful, so the majority of sexism I’ve experienced has been within academia.

Overall, the issues are structural and cultural, and so navigating them takes effort — firstly in making them visible and secondly in actively addressing them. There are no simple fixes, and any navigation places yet more emotional labor on females in tech.

What advice would you give to women seeking to enter the AI field?

My advice has always been to go for opportunities that allow you to level up, even if you don’t feel that you’re 100% the right fit. Let them decline rather than you foreclosing opportunities yourself. Research shows that men go for roles they think they could do, but women only go for roles they feel they already can or are doing competently. Currently, there’s also a trend toward more gender awareness in the hiring process and among funders, although recent examples show how far we have to go.

Look at the recently announced U.K. Research and Innovation AI hubs, a high-profile, multi-million-pound investment: all nine of the research hubs are led by men. We should really be doing better to ensure gender representation.

What are some of the most pressing issues facing AI as it evolves?

Given my background, it’s perhaps unsurprising that I’d say that the most pressing issues facing AI are those related to the immediate and downstream harms that might occur if we’re not careful in the design, governance and use of AI systems.

The most pressing issue, and one that has been heavily under-researched, is the environmental impact of large-scale models. We might choose at some point to accept those impacts if the benefits of the application outweigh the risks. But right now, we’re seeing widespread use of systems like Midjourney run simply for fun, with users largely, if not completely, unaware of the impact each time they run a query.

Another pressing issue is how we reconcile the speed of AI innovations and the ability of the regulatory climate to keep up. It’s not a new issue, but regulation is the best instrument we have to ensure that AI systems are developed and deployed responsibly.

It’s very easy to assume that what has been called the democratization of AI — by this, I mean systems such as ChatGPT being so readily available to anyone — is a positive development. However, we’re already seeing the effects of generated content on the creative industries and creative practitioners, particularly regarding copyright and attribution. Journalism and news producers are also racing to ensure their content and brands are not affected. This latter point has huge implications for our democratic systems, particularly as we enter key election cycles. The effects could be quite literally world-changing from a geopolitical perspective. It also wouldn’t be a list of issues without at least a nod to bias.

What are some issues AI users should be aware of?

Not sure if this relates to companies using AI or regular citizens, but I’m assuming the latter. I think the main issue here is trust. I’m thinking, here, of the many students now using large language models to generate academic work. Setting aside the moral issues, the models are still not good enough for that. Citations are often incorrect or out of context, and the nuance of some academic papers is lost.

But this speaks to a wider point: You can’t yet fully trust generated text and so should only use those systems when the context or outcome is low risk. The obvious second issue is veracity and authenticity. As models become increasingly sophisticated, it’s going to be ever harder to know for sure whether content is human- or machine-generated. We haven’t yet developed, as a society, the requisite literacies to make reasoned judgments about content in an AI-rich media landscape. The old rules of media literacy apply in the interim: Check the source.

Another issue is that AI is not human intelligence, and so the models aren’t perfect — they can be tricked or corrupted with relative ease if one has a mind to.

What is the best way to responsibly build AI?

The best instruments we have are algorithmic impact assessments and regulatory compliance, but ideally, we’d be looking for processes that actively seek to do good rather than just seeking to minimize risk.

Going back to basics, the obvious first step is to address the composition of designers — ensuring that AI, informatics and computer science as disciplines attract women, people of color and representation from other cultures. It’s obviously not a quick fix, but we’d clearly have addressed the issue of bias earlier if the field were more heterogeneous. That brings me to the issue of the data corpus, and ensuring that it’s fit for purpose and that efforts are made to appropriately de-bias it.

Then there comes the need to train systems architects to be aware of moral and socio-technical issues — placing the same weight on these as we do the primary disciplines. Then we need to give systems architects more time and agency to consider and fix any potential issues. Then we come to the matter of governance and co-design, where stakeholders should be involved in the governance and conceptual design of the system. And finally, we need to thoroughly stress-test systems before they get anywhere near human subjects.

Ideally, we should also be ensuring that there are mechanisms in place for opt-out, contestation and recourse — though much of this is covered by emerging regulations. It seems obvious, but I’d also add that you should be prepared to kill a project that’s set to fail on any measure of responsibility. There’s often something of the fallacy of sunk costs at play here, but if a project isn’t developing as you’d hope, then raising your risk tolerance rather than killing it can result in the untimely death of a product.

The European Union’s recently adopted AI Act covers much of this, of course.

How can investors better push for responsible AI?

Taking a step back here, it’s now generally understood and accepted that the whole model that underpins the internet is the monetization of user data. In the same way, much, if not all, of AI innovation is driven by capital gain. AI development in particular is a resource-hungry business, and the drive to be the first to market has often been described as an arms race. So, responsibility as a value is always in competition with those other values.

That’s not to say that companies don’t care, and there has also been much effort made by various AI ethicists to reframe responsibility as a way of actually distinguishing yourself in the field. But this feels like an unlikely scenario unless you’re a government or another public service. It’s clear that being the first to market is always going to be traded off against a full and comprehensive elimination of possible harms.

But coming back to the term responsibility. To my mind, being responsible is the least we can do. When we say to our kids that we’re trusting them to be responsible, what we mean is, don’t do anything illegal, embarrassing or insane. It’s literally the basement when it comes to behaving like a functioning human in the world. Conversely, when applied to companies, it becomes some kind of unreachable standard. You have to ask yourself, how is this even a discussion that we find ourselves having?

Also, the incentives to prioritize responsibility are pretty basic and relate to wanting to be a trusted entity while also not wanting your users to come to newsworthy harm. I say this because plenty of people at the poverty line, or those from marginalized groups, fall below the threshold of interest, as they don’t have the economic or social capital to contest any negative outcomes, or to raise them to public attention.

So, to loop back to the question, it depends on who the investors are. If it’s one of the big seven tech companies, then they’re covered by the above. They have to choose to prioritize different values at all times, and not only when it suits them. For the public or third sector, responsible AI is already aligned to their values, and so what they tend to need is sufficient experience and insight to help make the right and informed choices. Ultimately, to push for responsible AI requires an alignment of values and incentives.



Why vector databases are having a moment as the AI hype cycle peaks | TechCrunch


Vector databases are all the rage, judging by the number of startups entering the space and the investors ponying up for a piece of the pie. The proliferation of large language models (LLMs) and the generative AI (GenAI) movement have created fertile ground for vector database technologies to flourish.

While traditional relational databases such as Postgres or MySQL are well-suited to structured data — predefined data types that can be filed neatly in rows and columns — this doesn’t work so well for unstructured data such as images, videos, emails, social media posts, and any data that doesn’t adhere to a predefined data model.

Vector databases, on the other hand, store and process data in the form of vector embeddings, which convert text, documents, images, and other data into numerical representations that capture the meaning and relationships between the different data points. This is perfect for machine learning, as the database stores data spatially by how relevant each item is to the other, making it easier to retrieve semantically similar data.
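
To make the idea concrete, here is a minimal sketch of the retrieval step, assuming an `embed()` function that maps text to a fixed-length vector (in a real system this would be an embedding model; the hash-seeded stand-in below exists only so the example runs). Ranking stored items by cosine similarity to a query vector is the core operation that dedicated vector databases optimize at scale.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Close to 1.0 when two vectors point in the same direction (similar meaning).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text: str) -> np.ndarray:
    # Stand-in embedding function; a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

# A toy "vector database": each document is stored alongside its embedding.
documents = [
    "How do I reset my password?",
    "Best hiking trails near Seattle",
    "Steps to recover a locked account",
]
index = [(doc, embed(doc)) for doc in documents]

# Query: rank the stored documents by similarity to the query's embedding.
query_vec = embed("I forgot my login credentials")
ranked = sorted(index, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
for doc, vec in ranked:
    print(f"{cosine_similarity(query_vec, vec):.3f}  {doc}")
```

Production systems replace this linear scan with approximate nearest-neighbor indexes (HNSW is a common choice) so that searches stay fast across millions or billions of vectors.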

This is particularly useful for LLMs, such as OpenAI’s GPT-4, as it allows the AI chatbot to better understand the context of a conversation by analyzing previous similar conversations. Vector search is also useful for all manner of real-time applications, such as content recommendations in social networks or e-commerce apps, as it can look at what a user has searched for and retrieve similar items in a heartbeat. 

Vector search can also help reduce “hallucinations” in LLM applications by providing additional information that might not have been available in the original training dataset.
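
That technique is commonly known as retrieval-augmented generation (RAG). Continuing the sketch above and reusing its `embed`, `cosine_similarity` and `index` helpers, a minimal version looks like the following; `call_llm` is a placeholder for whichever model API an application actually uses.

```python
def retrieve(query: str, index: list, k: int = 2) -> list[str]:
    # Rank the stored documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine_similarity(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (a hosted API or a local model).
    raise NotImplementedError

def answer_with_context(question: str, index: list) -> str:
    # Ground the model's answer in retrieved documents instead of relying
    # solely on whatever it memorized during training.
    context = "\n".join(f"- {doc}" for doc in retrieve(question, index))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```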

“Without using vector similarity search, you can still develop AI/ML applications, but you would need to do more retraining and fine-tuning,” Andre Zayarni, CEO and co-founder of vector search startup Qdrant, explained to TechCrunch. “Vector databases come into play when there’s a large dataset, and you need a tool to work with vector embeddings in an efficient and convenient way.”

In January, Qdrant secured $28 million in funding to capitalize on growth that made it one of the top 10 fastest-growing commercial open source startups last year. And it’s far from the only vector database startup to raise cash of late — Vespa, Weaviate, Pinecone, and Chroma collectively raised $200 million last year for various vector offerings.

Qdrant founding team. Image Credits: Qdrant

Since the turn of the year, we’ve also seen Index Ventures lead a $9.5 million seed round into Superlinked, a platform that transforms complex data into vector embeddings. And a few weeks back, Y Combinator (YC) unveiled its Winter ’24 cohort, which included Lantern, a startup that sells a hosted vector search engine for Postgres.

Elsewhere, Marqo raised a $4.4 million seed round late last year, swiftly followed by a $12.5 million Series A round in February. The Marqo platform provides a full gamut of vector tools out of the box, spanning vector generation, storage, and retrieval, allowing users to circumvent third-party tools from the likes of OpenAI or Hugging Face, and it offers everything via a single API.

Marqo co-founders Tom Hamer and Jesse N. Clark previously worked in engineering roles at Amazon, where they realized the “huge unmet need” for semantic, flexible searching across different modalities such as text and images. And that is when they jumped ship to form Marqo in 2021.

“Working with visual search and robotics at Amazon was when I really looked at vector search — I was thinking about new ways to do product discovery, and that very quickly converged on vector search,” Clark told TechCrunch. “In robotics, I was using multi-modal search to search through a lot of our images to identify if there were errant things like hoses and packages. This was otherwise going to be very challenging to solve.”

Marqo co-founders Jesse Clark and Tom Hamer. Image Credits: Marqo

Enter the enterprise

While vector databases are having a moment amid the hullabaloo of ChatGPT and the GenAI movement, they’re not the panacea for every enterprise search scenario.

“Dedicated databases tend to be fully focused on specific use cases and hence can design their architecture for performance on the tasks needed, as well as user experience, compared to general-purpose databases, which need to fit it into the current design,” Peter Zaitsev, founder of database support and services company Percona, explained to TechCrunch.

Specialized databases might excel at one thing to the exclusion of others, which is why we’re starting to see database incumbents such as Elastic, Redis, OpenSearch, Cassandra, Oracle, and MongoDB add vector search smarts to the mix, along with cloud service providers like Microsoft’s Azure, Amazon’s AWS, and Cloudflare.

Zaitsev compares this latest trend to what happened with JSON more than a decade ago, when web apps became more prevalent and developers needed a language-independent data format that was easy for humans to read and write. In that case, a new database class emerged in the form of document databases such as MongoDB, while existing relational databases also introduced JSON support.

“I think the same is likely to happen with vector databases,” Zaitsev told TechCrunch. “Users who are building very complicated and large-scale AI applications will use dedicated vector search databases, while folks who need to build a bit of AI functionality for their existing application are more likely to use vector search functionality in the databases they use already.”

But Zayarni and his Qdrant colleagues are betting that native solutions built entirely around vectors will provide the “speed, memory safety, and scale” needed as vector data explodes, compared to the companies bolting vector search on as an afterthought.

“Their pitch is, ‘we can also do vector search, if needed,’” Zayarni said. “Our pitch is, ‘we do advanced vector search in the best way possible.’ It is all about specialization. We actually recommend starting with whatever database you already have in your tech stack. At some point, you will face limitations if vector search is a critical component of your solution.”



Women in AI: Allison Cohen on building responsible AI projects | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Allison Cohen is the senior applied AI projects manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. She works with researchers, social scientists and external partners to deploy socially beneficial AI projects. Cohen’s portfolio of work includes a tool that detects misogyny, an app to identify online activity from suspected human trafficking victims, and an agricultural app to recommend sustainable farming practices in Rwanda.

Previously, Cohen was a co-lead on AI drug discovery at the Global Partnership on Artificial Intelligence, an organization to guide the responsible development and use of AI. She’s also served as an AI strategy consultant at Deloitte and a project consultant at the Center for International Digital Policy, an independent Canadian think tank.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

The realization that we could mathematically model everything from recognizing faces to negotiating trade deals changed the way I saw the world, which is what made AI so compelling to me. Ironically, now that I work in AI, I see that we can’t — and in many cases shouldn’t — be capturing these kinds of phenomena with algorithms.

I was exposed to the field while I was completing a master’s in global affairs at the University of Toronto. The program was designed to teach students to navigate the systems affecting the world order — everything from macroeconomics to international law to human psychology. As I learned more about AI, though, I recognized how vital it would become to world politics, and how important it was to educate myself on the topic.

What allowed me to break into the field was an essay-writing competition. For the competition, I wrote a paper describing how psychedelic drugs would help humans stay competitive in a labor market riddled with AI, which qualified me to attend the St. Gallen Symposium in 2018 (it was a creative writing piece). My invitation, and subsequent participation in that event, gave me the confidence to continue pursuing my interest in the field.

What work are you most proud of in the AI field?

One of the projects I managed involved building a dataset containing instances of subtle and overt expressions of bias against women.

For this project, staffing and managing a multidisciplinary team of natural language processing experts, linguists and gender studies specialists throughout the entire project life cycle was crucial. It’s something that I’m quite proud of. I learned firsthand why this process is fundamental to building responsible applications, and also why it’s not done enough — it’s hard work! If you can support each of these stakeholders in communicating effectively across disciplines, you can facilitate work that blends decades-long traditions from the social sciences and cutting-edge developments in computer science.

I’m also proud that this project was well received by the community. One of our papers got a spotlight recognition in the socially responsible language modeling workshop at one of the leading AI conferences, NeurIPS. Also, this work inspired a similar interdisciplinary process that was managed by AI Sweden, which adapted the work to fit Swedish notions and expressions of misogyny.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

It’s unfortunate that in such a cutting-edge industry, we’re still seeing problematic gender dynamics. It’s not just adversely affecting women — all of us are losing. I’ve been quite inspired by a concept called “feminist standpoint theory” that I learned about in Sasha Costanza-Chock’s book, “Design Justice.”

The theory claims that marginalized communities, whose knowledge and experiences don’t benefit from the same privileges as others, have an awareness of the world that can bring about fair and inclusive change. Of course, not all marginalized communities are the same, and neither are the experiences of individuals within those communities.

That said, a variety of perspectives from those groups are critical in helping us navigate, challenge and dismantle all kinds of structural challenges and inequities. That’s why a failure to include women can keep the field of AI exclusionary for an even wider swath of the population, reinforcing power dynamics outside of the field as well.

In terms of how I’ve handled a male-dominated industry, I’ve found allies to be quite important. These allies are a product of strong and trusting relationships. For example, I’ve been very fortunate to have friends like Peter Kurzwelly, who’s shared his expertise in podcasting to support me in the creation of a female-led and -centered podcast called “The World We’re Building.” This podcast allows us to elevate the work of even more women and non-binary people in the field of AI.

What advice would you give to women seeking to enter the AI field?

Find an open door. It doesn’t have to be paid, it doesn’t have to be a career and it doesn’t even have to be aligned with your background or experience. If you can find an opening, you can use it to hone your voice in the space and build from there. If you’re volunteering, give it your all — it’ll allow you to stand out and hopefully get paid for your work as soon as possible.

Of course, there’s privilege in being able to volunteer, which I also want to acknowledge.

When I lost my job during the pandemic and unemployment was at an all-time high in Canada, very few companies were looking to hire AI talent, and those that were hiring weren’t looking for global affairs students with eight months’ experience in consulting. While applying for jobs, I began volunteering with an AI ethics organization.

One of the projects I worked on while volunteering was about whether there should be copyright protection for art produced by AI. I reached out to a lawyer at a Canadian AI law firm to better understand the space. She connected me with someone at CIFAR, who connected me with Benjamin Prud’homme, the executive director of Mila’s AI for Humanity Team. It’s amazing to think that through a series of exchanges about AI art, I learned about a career opportunity that has since transformed my life.

What are some of the most pressing issues facing AI as it evolves?

I have three answers to this question that are somewhat interconnected. I think we need to figure out:

  1. How to reconcile the fact that AI is built to be scaled while ensuring that the tools we’re building are adapted to fit local knowledge, experience and needs.
  2. If we’re to build tools that are adapted to the local context, we’re going to need to incorporate anthropologists and sociologists into the AI design process. But there are a plethora of incentive structures and other obstacles preventing meaningful interdisciplinary collaboration. How can we overcome this?
  3. How can we affect the design process even more profoundly than simply incorporating multidisciplinary expertise? Specifically, how can we alter the incentives such that we’re designing tools built for those who need it most urgently rather than those whose data or business is most profitable?

What are some issues AI users should be aware of?

Labor exploitation is one of the issues that I don’t think gets enough coverage. There are many AI models that learn from labeled data using supervised learning methods. When the model relies on labeled data, there are people that have to do this tagging (i.e., someone adds the label “cat” to an image of a cat). These people (annotators) are often the subjects of exploitative practices. For models that don’t require the data to be labeled during the training process (as is the case with some generative AI and other foundation models), datasets can still be built exploitatively in that the developers often don’t obtain consent nor provide compensation or credit to the data creators.

I would recommend checking out the work of Krystal Kauffman, who I was so glad to see featured in this TechCrunch series. She’s making headway in advocating for annotators’ labor rights, including a living wage, the end to “mass rejection” practices, and engagement practices that align with fundamental human rights (in response to developments like intrusive surveillance).

What is the best way to responsibly build AI?

Folks often look to ethical AI principles in order to claim that their technology is responsible. Unfortunately, ethical reflection can only begin after a number of decisions have already been made, including but not limited to:

  1. What are you building?
  2. How are you building it?
  3. How will it be deployed?

If you wait until after these decisions have been made, you’ll have missed countless opportunities to build responsible technology.

In my experience, the best way to build responsible AI is to be cognizant of — from the earliest stages of your process — how your problem is defined and whose interests it satisfies; how the orientation supports or challenges pre-existing power dynamics; and which communities will be empowered or disempowered through the AI’s use.

If you want to create meaningful solutions, you must navigate these systems of power thoughtfully.

How can investors better push for responsible AI?

Ask about the team’s values. If the values are defined, at least in part, by the local community and there’s a degree of accountability to that community, it’s more likely that the team will incorporate responsible practices.



This Week in AI: When 'open source' isn't so open | TechCrunch


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week, Meta released the latest in its Llama series of generative AI models: Llama 3 8B and Llama 3 70B. Capable of analyzing and writing text, the models are “open sourced,” Meta said — intended to be a “foundational piece” of systems that developers design with their unique goals in mind.

“We believe these are the best open source models of their class, period,” Meta wrote in a blog post. “We are embracing the open source ethos of releasing early and often.”

There’s only one problem: the Llama 3 models aren’t really “open source,” at least not in the strictest definition.

Open source implies that developers can use the models how they choose, unfettered. But in the case of Llama 3 — as with Llama 2 — Meta has imposed certain licensing restrictions. For example, Llama models can’t be used to train other models. And app developers with over 700 million monthly users must request a special license from Meta. 

Debates over the definition of open source aren’t new. But as companies in the AI space play fast and loose with the term, it’s injecting fuel into long-running philosophical arguments.

Last August, a study co-authored by researchers at Carnegie Mellon, the AI Now Institute and the Signal Foundation found that many AI models branded as “open source” come with big catches — not just Llama. The data required to train the models is kept secret. The compute power needed to run them is beyond the reach of many developers. And the labor to fine-tune them is prohibitively expensive.

So if these models aren’t truly open source, what are they, exactly? That’s a good question; defining open source with respect to AI isn’t an easy task.

One pertinent unresolved question is whether copyright, the foundational IP mechanism open source licensing is based on, can be applied to the various components and pieces of an AI project, in particular a model’s inner scaffolding (e.g., embeddings). Then there’s the mismatch to overcome between the perception of open source and how AI actually functions: open source was devised in part to ensure that developers could study and modify code without restrictions. With AI, though, which ingredients you need to do the studying and modifying is open to interpretation.

Wading through all the uncertainty, the Carnegie Mellon study does make clear the harm inherent in tech giants like Meta co-opting the phrase “open source.”

Often, “open source” AI projects like Llama end up kicking off news cycles — free marketing — and providing technical and strategic benefits to the projects’ maintainers. The open source community rarely sees these same benefits, and, when they do, they’re marginal compared to the maintainers’.

Instead of democratizing AI, “open source” AI projects — especially those from big tech companies — tend to entrench and expand centralized power, say the study’s co-authors. That’s good to keep in mind the next time a major “open source” model release comes around.

Here are some other AI stories of note from the past few days:

  • Meta updates its chatbot: Coinciding with the Llama 3 debut, Meta upgraded its AI chatbot across Facebook, Messenger, Instagram and WhatsApp — Meta AI — with a Llama 3-powered backend. It also launched new features, including faster image generation and access to web search results.
  • AI-generated porn: Ivan writes about how the Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images.
  • Snap watermarks: Social media service Snap plans to add watermarks to AI-generated images on its platform. A translucent version of the Snap logo with a sparkle emoji, the new watermark will be added to any AI-generated image exported from the app or saved to the camera roll.
  • The new Atlas: Hyundai-owned robotics company Boston Dynamics has unveiled its next-generation humanoid Atlas robot, which, in contrast to its hydraulics-powered predecessor, is all-electric — and much friendlier in appearance.
  • Humanoids on humanoids: Not to be outdone by Boston Dynamics, the founder of Mobileye, Amnon Shashua, has launched a new startup, Menteebot, focused on building bipedal robotics systems. A demo video shows Menteebot’s prototype walking over to a table and picking up fruit.
  • Reddit, translated: In an interview with Amanda, Reddit CPO Pali Bhat revealed that an AI-powered language translation feature to bring the social network to a more global audience is in the works, along with an assistive moderation tool trained on Reddit moderators’ past decisions and actions.
  • AI-generated LinkedIn content: LinkedIn has quietly started testing a new way to boost its revenues: a LinkedIn Premium Company Page subscription, which — for fees that appear to be as steep as $99/month — includes AI to write content and a suite of tools to grow follower counts.
  • A Bellwether: Google parent Alphabet’s moonshot factory, X, this week unveiled Project Bellwether, its latest bid to apply tech to some of the world’s biggest problems. Here, that means using AI tools to identify natural disasters like wildfires and flooding as quickly as possible.
  • Protecting kids with AI: Ofcom, the regulator charged with enforcing the U.K.’s Online Safety Act, plans to launch an exploration into how AI and other automated tools can be used to proactively detect and remove illegal content online, specifically to shield children from harmful content.
  • OpenAI lands in Japan: OpenAI is expanding to Japan, with the opening of a new Tokyo office and plans for a GPT-4 model optimized specifically for the Japanese language.

More machine learnings


Can a chatbot change your mind? Swiss researchers found that not only can they, but if they are pre-armed with some personal information about you, they can actually be more persuasive in a debate than a human with that same info.

“This is Cambridge Analytica on steroids,” said project lead Robert West from EPFL. The researchers suspect the model — GPT-4 in this case — drew from its vast stores of arguments and facts online to present a more compelling and confident case. But the outcome kind of speaks for itself. Don’t underestimate the power of LLMs in matters of persuasion, West warned: “In the context of the upcoming US elections, people are concerned because that’s where this kind of technology is always first battle tested. One thing we know for sure is that people will be using the power of large language models to try to swing the election.”

Why are these models so good at language, anyway? It’s an area with a long history of research, going back to ELIZA. If you’re curious about one of the people who’s been there for a lot of it (and performed no small amount of it himself), check out this profile of Stanford’s Christopher Manning. He was just awarded the John von Neumann Medal; congrats!

In a provocatively titled interview, another longtime AI researcher (who has graced the TechCrunch stage as well), Stuart Russell, and postdoc Michael Cohen speculate on “How to keep AI from killing us all.” Probably a good thing to figure out sooner rather than later! It’s not a superficial discussion, though — these are smart people talking about how we can actually understand the motivations (if that’s the right word) of AI models and how regulations ought to be built around them.

The interview is actually regarding a paper in Science published earlier this month, in which they propose that advanced AIs capable of acting strategically to achieve their goals, which they call “long-term planning agents,” may be impossible to test. Essentially, if a model learns to “understand” the testing it must pass in order to succeed, it may very well learn ways to creatively negate or circumvent that testing. We’ve seen it at a small scale, so why not a large one?

Russell proposes restricting the hardware needed to make such agents… but of course, Los Alamos and Sandia National Labs just got their deliveries. LANL just had the ribbon-cutting ceremony for Venado, a new supercomputer intended for AI research, composed of 2,560 Grace Hopper Nvidia chips.

Researchers look into the new neuromorphic computer.

And Sandia just received “an extraordinary brain-based computing system called Hala Point,” with 1.15 billion artificial neurons, built by Intel and believed to be the largest such system in the world. Neuromorphic computing, as it’s called, isn’t intended to replace systems like Venado, but to pursue new methods of computation that are more brain-like than the rather statistics-focused approach we see in modern models.

“With this billion-neuron system, we will have an opportunity to innovate at scale both new AI algorithms that may be more efficient and smarter than existing algorithms, and new brain-like approaches to existing computer algorithms such as optimization and modeling,” said Sandia researcher Brad Aimone. Sounds dandy… just dandy!



Exclusive: Simbian brings AI to existing security tools


Ambuj Kumar is nothing if not ambitious.

An electrical engineer by training, Kumar led hardware design for eight years at Nvidia, helping to develop tech including a widely used high-speed memory controller for GPUs. After leaving Nvidia in 2010, Kumar pivoted to cybersecurity, eventually co-founding Fortanix, a cloud data security platform.

It was while heading up Fortanix that the idea for Kumar’s next venture came to him: an AI-powered tool to automate a company’s cybersecurity workflows, inspired by challenges he observed in the cybersecurity industry.

“Security leaders are stressed,” Kumar told TechCrunch. “CISOs don’t last more than a couple of years on average, and security analysts have some of the highest churn. And things are getting worse.”

Kumar’s solution, which he co-founded with former Twitter software engineer Alankrit Chona, is Simbian, a cybersecurity platform that effectively controls other cybersecurity platforms as well as security apps and tooling. Leveraging AI, Simbian can automatically orchestrate and operate existing security tools, finding the right configurations for each product by taking into account a company’s priorities and thresholds for security, informed by their business requirements.

With Simbian’s chatbot-like interface, users can type a cybersecurity goal in natural language, and Simbian will provide personalized recommendations and generate what Kumar describes as “automated actions” to carry that goal out (as best it can).
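
To make that flow concrete, here is a minimal, hypothetical sketch of how a natural-language security goal might be turned into structured tool actions. None of the names here (plan_actions, call_llm, the JSON schema) come from Simbian; the model call is stubbed out with canned output so the example runs on its own.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned plan so the sketch runs offline.
    return json.dumps([
        {"tool": "firewall", "action": "block_ip", "params": {"ip": "203.0.113.7"}},
        {"tool": "siem", "action": "open_ticket", "params": {"severity": "high"}},
    ])

def plan_actions(goal: str) -> list:
    # Ask the model to turn a plain-English security goal into structured actions.
    prompt = (
        "You orchestrate security tools. Return a JSON list of actions "
        f"(tool, action, params) that accomplish this goal:\n{goal}"
    )
    return json.loads(call_llm(prompt))

def execute(actions: list) -> None:
    # In a real system each step would be dispatched to the matching tool's API,
    # ideally with a human approving anything destructive.
    for step in actions:
        print(f"[{step['tool']}] {step['action']} {step['params']}")

if __name__ == "__main__":
    execute(plan_actions("Block the IP behind last night's brute-force alerts"))
```

In practice, the value and the risk both sit in that execution step: generated actions need validation before they touch production systems.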

“Security companies have focused on making their own products better, which leads to a very fragmented industry,” Kumar said. “This results in a higher operational burden for organizations.”

To Kumar’s point, polls show that cybersecurity budgets are often wasted on an overabundance of tools. More than half of businesses feel that they’ve misspent around 50% of their budgets and still can’t remediate threats, according to one survey cited by Forbes. A separate study found that organizations now juggle on average 76 security tools, leading IT teams and leaders to feel overwhelmed.

“Security has been a cat-and-mouse game between attackers and defenders for a long time; the attack surface keeps growing due to IT growth,” Kumar said, adding that there’s “not enough talent to go around.” (One recent survey from Cybersecurity Ventures, a security-focused VC firm, estimates that the shortfall of cyber experts will reach 3.5 million people by 2025.)

In addition to automatically configuring a company’s security tools, the Simbian platform attempts to respond to “security events” by letting customers steer security while taking care of lower-level details. This, Kumar says, can significantly cut down on the number of alerts a security analyst must respond to.

But that assumes Simbian’s AI doesn’t make mistakes, a tall order, given that it’s well established that AI is error-prone.

To minimize the potential for off-the-rails behavior, Simbian’s AI was trained using a crowdsourcing approach — a game on its website called “Are you smarter than an LLM?” — that tasked volunteers with trying to “trick” the AI into doing the wrong thing. Kumar explained that Simbian used this learning, along with in-house researchers, to “ensure the AI does the right thing in its use cases.”

This means that Simbian effectively outsourced part of its AI training to unpaid gamers. But, to be fair, it’s unclear how many people actually played the company’s game; Kumar wouldn’t say.

There are privacy implications to a system that controls other systems, especially ones that are security-related. Would companies — and vendors, for that matter — be comfortable with sensitive data funneling through a single, AI-controlled portal?

Kumar claims that every attempt has been made to protect against data compromise. Simbian uses encryption — customers control the encryption keys — and customers can delete their data at any time.

“As a customer, you have full control,” he said.

While Simbian isn’t the only platform attempting to apply a layer of AI over existing security tools — Nexusflow offers a product in a similar vein — it appears to have won over investors. The company recently raised $10 million from investors including Coinbase board member Gokul Rajaram, Cota Capital partner Aditya Singh, Icon Ventures, Firebolt and Rain Capital.

“Cybersecurity is one of the most important problems of our time, and has famously fragmented ecosystem with thousands of vendors,” Rajaram told TechCrunch via email. “Companies have tried to build expertise around specific products and problems. I applaud Simbian’s method of building an integrated platform that would understand and operate all of security. While this is extremely challenging approach from technology perspective, I’ll put my money — and I did put my money — on Simbian. It’s the team with unique experience all the way from hardware to cloud.”

Mountain View-based Simbian, which has 15 employees, plans to put the bulk of the capital it’s raised toward product development. Kumar’s aiming to double the size of the startup’s workforce by the end of the year.



Robotic Automations

Humane’s Ai Pin considers life beyond the smartphone | TechCrunch


Nothing lasts forever. Nowhere is the truism more apt than in consumer tech. This is a land inhabited by the eternally restless — always on the make for the next big thing. The smartphone has, by all accounts, had a good run. Seventeen years after the iPhone made its public debut, the devices continue to reign. Over the last several years, however, the cracks have begun to show.

The market plateaued, as sales slowed and ultimately contracted. Last year was punctuated by stories citing the worst demand in a decade, leaving an entire industry asking the same simple question: what’s next? If there was an easy answer, a lot more people would currently be a whole lot richer.

Smartwatches have had a moment, though these devices are largely regarded as accessories that augment the smartphone experience. As for AR/VR, the best you can currently say is that — after a glacial start — the jury is still very much out on products like the Meta Quest and Apple Vision Pro.

When it began to tease its existence through short, mysterious videos in the summer of 2022, Humane promised a glimpse of the future. The company promised an approach every bit as human-centered as its name implied. It was, at the very least, well-funded, to the tune of $100 million+ (now $230 million), and featured an AI element.

The company’s first product, the Humane Ai Pin, arrives this week. It suggests a world where being plugged in doesn’t require having one’s eyes glued to a screen in every waking moment. It’s largely — but not wholly — hands-free. A tap to the front touch panel wakes up the system. Then it listens — and learns.

Beyond the smartphone

Image Credits: Darrell Etherington/TechCrunch

Humane couldn’t ask for better timing. While the startup has been operating largely in stealth for the past seven years, its market debut comes as the trough of smartphone excitement intersects with the crest of generative AI hype. The company’s bona fides contributed greatly to pre-launch excitement. Founders Bethany Bongiorno and Imran Chaudhri were previously well-placed at Apple. OpenAI’s Sam Altman, meanwhile, was an early and enthusiastic backer.

Excitement around smart assistants like Siri, Alexa and Google Home began to ebb in the last few years, but generative AI platforms like OpenAI’s ChatGPT and Google’s Gemini have flooded that vacuum. The world is enraptured with plugging a few prompts into a text field and watching as the black box spits out a shiny new image, song or video. It’s novel enough to feel like magic, and consumers are eager to see what role it will play in our daily lives.

That’s the Ai Pin’s promise. It’s a portal to ChatGPT and its ilk from the comfort of our lapels, and it does this with a meticulous attention to hardware design befitting its founders’ origins.

Press coverage around the startup has centered on the story of two Apple executives having grown weary of the company’s direction — or lack thereof. Sure, post-Steve Jobs Apple has had successes in the form of the Apple Watch and AirPods, but while Tim Cook is well equipped to create wealth, he’s never been painted as a generational creative genius like his predecessor.

If the world needs the next smartphone, perhaps it also needs the next Apple to deliver it. It’s a concept Humane’s founders are happy to play into. The story of the company’s founding, after all, originates inside the $2.6 trillion behemoth.

Start spreading the news

Image Credits: Alexander Spatari / Getty Images

In late March, TechCrunch paid a visit to Humane’s New York office. The feeling was tangibly different than our trip to the company’s San Francisco headquarters in the waning months of 2023. The earlier event buzzed with the manic energy of an Apple Store. It was controlled and curated, beginning with a small presentation from Bongiorno and Chaudhri, and culminating in various stations staffed by Humane employees designed to give a crash course on the product’s feature set and origins.

Things in Manhattan were markedly subdued by comparison. The celebratory buzz that accompanies product launches has dissipated into something more formal, with employees focused on dotting I’s and crossing T’s in the final push before product launch. The intervening months provided plenty of confirmation that the Ai Pin wasn’t the only game in town.

January saw the Rabbit R1’s CES launch. The startup opted for a handheld take on generative AI devices. The following month, Samsung welcomed customers to “the era of Mobile AI.”  The “era of generative AI” would have been more appropriate, as the hardware giant leveraged a Google Gemini partnership aimed at relegating its bygone smart assistant Bixby to a distant memory. Intel similarly laid claim to the “AI PC,” while in March Apple confidently labeled the MacBook Air the “world’s best consumer laptop for AI.”

At the same time, Humane stumbled through a news cycle of reports about a small layoff round and a slight delay in preorder fulfillment. Both can be written off as products of the immense difficulty of launching a first-generation hardware product — especially under the kind of intense scrutiny few startups see.

For the second meeting with Bongiorno and Chaudhri, we gathered around a conference table. The first goal was an orientation with the device, ahead of review. I’ve increasingly turned down these sorts of meeting requests post-pandemic, but the Ai Pin represents a novel enough paradigm to justify a sit-down orientation with the device. Humane also sent me home with a 30-minute intro video designed to familiarize users — not the sort of thing most folks require when, say, upgrading a phone.

More interesting to me, however, was the prospect of sitting down with the founders for the sort of wide-ranging interview we weren’t able to do during last year’s San Francisco event. Now that most of the mystery is gone, Chaudhri and Bongiorno were more open about discussing the product — and company — in-depth.

Origin story

Humane co-founders Bethany Bongiorno and Imran Chaudhri.

One Infinite Loop is the only place one can reasonably open the Humane origin story. The startup’s founders met on Bongiorno’s first day at Apple in 2008, not long after the launch of the iPhone App Store. Chaudhri had been at the company for 13 years at that point, having joined in the depths of Apple’s mid-’90s struggles; Jobs would return two years after that, following the company’s acquisition of NeXT.

Chaudhri’s 22 years with the company saw him working as director of Design on both the hardware and software sides of projects like the Mac and iPhone. Bongiorno worked as project manager for iOS, macOS and what would eventually become iPadOS. The pair married in 2016 and left Apple the same year.

“We began our new life,” says Bongiorno, “which involves thinking a lot about where the industry was going and what we were passionate about.” The pair started consulting work. However, Bongiorno describes a seemingly mundane encounter that would change their trajectory soon after.

Image Credits: Humane

“We had gone to this dinner, and there was a family sitting next to us,” she says. “There were three kids and a mom and dad, and they were on their phones the entire time. It really started a conversation about the incredible tool we built, but also some of the side effects.”

Bongiorno adds that she arrived home one day in 2017 to see Chaudhri pulling apart electronics. He had also typed out a one-page descriptive vision for the company that would formally be founded as Humane later the same year.

According to Bongiorno, Humane’s first hardware device never strayed too far from Chaudhri’s early mockups. “The vision is the same as what we were pitching in the early days,” she says. That’s down to Ai Pin’s most head-turning feature, a built-in projector that allows one to use the surface of their hand as a kind of makeshift display. It’s a tacit acknowledgement that, for all of the talk about the future of computing, screens are still the best method for accomplishing certain tasks.

Much of the next two years were spent exploring potential technologies and building early prototypes. In 2018, the company began discussing the concept with advisors and friends, before beginning work in earnest the following year.

Staring at the sun

In July 2022, Humane tweeted, “It’s time for change, not more of the same.” The message, which reads as much like a tagline as a mission statement, was accompanied by a minute-long video. It opens in dramatic fashion on a rendering of an eclipse. A choir sings in a bombastic — almost operatic — fashion, as the camera pans down to a crowd. As the moon obscures the sunlight, their faces are illuminated by their phone screens. The message is not subtle.

The crowd opens to reveal a young woman in a tank top. Her head lifts up. She is now staring directly into the eclipse (not advised). There are lyrics now, “If I had everything, I could change anything,” as she pushes forward to the source of the light. She holds her hand to the sky. A green light illuminates her palm in the shape of the eclipse. This last bit is, we’ll soon discover, a reference to the Ai Pin’s projector. The marketing team behind the video is keenly aware that, while it’s something of a secondary feature, it’s the most likely to grab public attention.

As a symbol, the eclipse has become deeply ingrained in the company’s identity. The green eclipse on the woman’s hand is also Humane’s logo. It’s built into the Ai Pin’s design language, as well. A metal version serves as the connection point between the pin and its battery packs.

Image Credits: Brian Heater

The company is so invested in the motif that it held an event on October 14, 2023, to coincide with a solar eclipse. The device comes in three colors: Eclipse, Equinox and Lunar, and it’s almost certainly no coincidence that this current big news push is happening a mere days after another North American solar eclipse.

However, it was on the runway of a Paris fashion show in September that the Ai Pin truly broke cover. The world got its first good look at the product as it was magnetically secured to the lapels of models’ suit jackets. It was a statement, to be sure. Though its founders had left Apple a half-dozen years prior, they were still very much invested in industrial design, creating a product designed to be a fashion accessory (your mileage will vary).

The design had evolved somewhat since conception. For one thing, the top of the device, which houses the sensors and projector, is now angled downward, so the Pin’s vantage point is roughly the same as its wearer’s. An earlier version with a flatter surface would unintentionally angle the pin upward when worn on certain chest types. Nailing down a more universal design required a lot of trial and error with people of different shapes and sizes.

“There’s an aspect of this particular hardware design that has to be compassionate to who’s using it,” says Chaudhri. “It’s very different when you have a handheld aspect. It feels more like an instrument or a tool […] But when you start to have a more embodied experience, the design of the device has to be really understanding of who’s wearing it. That’s where the compassion comes from.”

Year of the Rabbit?

Image Credits: rabbit

Then came competition. When it was unveiled at CES on January 9, the Rabbit R1 stole the show.

“The phone is an entertainment device, but if you’re trying to get something done it’s not the highest efficiency machine,” CEO and founder Jesse Lyu noted at the time. “To arrange dinner with a colleague we needed four-five different apps to work together. Large language models are a universal solution for natural language, we want a universal solution for these services — they should just be able to understand you.”

While the R1’s product design is novel in its own right, it’s arguably a more traditional piece of consumer electronics than Ai Pin. It’s handheld and has buttons and a screen. At its heart, however, the functionality is similar. Both are designed to supplement smartphone usage and are built around a core of LLM-trained AI.

The device’s price point also contributed to its initial buzz. At $200, it’s a fraction of the Ai Pin’s $699 starting price. The more familiar form factor also likely comes with a smaller learning curve than Humane’s product.

Asked about the device, Bongiorno makes the case that another competitor only validates the space. “I think it’s exciting that we kind of sparked this new interest in hardware,” she says. “I think it’s awesome. Fellow builders. More of that, please.”

She adds, however, that the excitement wasn’t necessarily there at Humane from the outset. “We talked about it internally at the company. Of course people were nervous. They were like, ‘what does this mean?’ Imran and I got in front of the company and said, ‘guys, if there weren’t people who followed us, that means we’re not doing the right thing. Then something’s wrong.’”

Bongiorno further suggests that Rabbit is focused on a different use case, as its product requires focus similar to that of a smartphone — though both Bongiorno and Chaudhri have yet to use the R1.

A day after Rabbit unveiled its product, Humane confirmed that it had laid off 10 employees — about 4% of its workforce. That’s a small cut for a company with a small headcount, but the timing wasn’t great, coming a few months ahead of the product’s official launch. The same news cycle saw long-time CTO Patrick Gates exit the C-suite role for an advisory position.

“The honest truth is we’re a company that is constantly going through evolution,” Bongiorno says of the layoffs. “If you think about where we were five years ago, we were in R&D. Now we are a company that’s about to ship to customers, that’s about to have to operate in a different way. Like every growing and evolving company, changes are going to happen. It’s actually really healthy and important to go through that process.”

The following month, the company announced that its pins would now be shipping in mid-April. It was a slight delay from the original March ship date, though Chaudhri offers something of a Bill Clinton-style “it depends on what your definition of ‘is’ is” answer. The company, he suggests, defines “shipping” as leaving the factory, rather than the more industry-standard definition of shipping to customers.

“We said we were shipping in March and we are shipping in March,” he says. “The devices leave the factory. The rest is on the U.S. government and how long they take when they hold things in place — tariffs and regulations and other stuff.”

Money moves

Image Credits: Brian Heater

No one invests $230 million in a startup out of the goodness of their heart. Sooner or later, backers will be looking for a return. Integral to Humane’s path to positive cashflow is a subscription service that’s required to use the thing. The $699 price tag comes with 90 days free, then after that, you’re on the hook for $24 a month.

That fee brings talk, text and data from T-Mobile, cloud storage and — most critically — access to the Ai Bus, which is foundational to the device’s operation. Humane describes it thusly, “An entirely new AI software framework, the Ai Bus, brings Ai Pin to life and removes the need to download, manage, or launch apps. Instead, it quickly understands what you need, connecting you to the right AI experience or service instantly.”

Investors, of course, love to hear about subscriptions. Hell, even Apple relies on service revenue for growth as hardware sales have slowed.

Bongiorno alludes to internal projections for revenue, but won’t go into specifics for the timeline. She adds that the company has also discussed an eventual path to IPO even at this early stage in the process.

“If we weren’t, that would not be responsible for any company,” she says. “These are things that we care deeply about. Our vision for Humane from the beginning was that we wanted to build a company where we could build a lot of things. This is our first product, and we have a large roadmap that Imran is really passionate about of where we want to go.”

Chaudhri adds that the company “graduated beyond sketches” for those early products. “We’ve got some early photos of things that we’re thinking about, some concept pieces and some stuff that’s a lot more refined than those sketches when it was a one-man team. We are pretty passionate about the AI space and what it actually means to productize AI.”





Robotic Automations

Amazon, eyeing up AI, adds Andrew Ng to its board — ex-MTV exec McGrath to step down | TechCrunch


If the decisions made by corporate boards of directors can indicate where a company wants to be focusing, Amazon’s board just made an interesting move. The company announced on Thursday that Andrew Ng, known for building AI at large tech companies, is joining its board of directors. The company also said that Judy McGrath — best known for her work as a long-time TV executive, running MTV and helping Viacom become a media powerhouse — will be stepping down as a director.

Taken together, the two moves sketch out an interesting picture of the tech giant’s intentions.

After many costly years of going all out on building an entertainment empire (Amazon spent almost $19 billion on its video and music business in 2023), it’s interesting to see that McGrath, who would have been an important advocate and adviser on that strategy, is not going to stand for reelection.

That’s not at all to say that Amazon will cease to be a huge force in streaming entertainment, be it video, music, gaming or anything else. The company is now folding in advertising across Prime Video, which is one big reason it may want to keep its audience happy and coming back.

Still, it will be interesting to see how investments play out in that segment in 2024. The company has laid off hundreds of employees in its studio and video divisions, and it has also been winding down Prime Video in some regions, which may indicate that the business could be smaller, or at least more focused, going forward. And given the AI whiplash that every Big Tech company is currently dealing with, it feels timely that McGrath is stepping away from the board now.

On that note, to stay at the forefront of tech, Amazon will be looking for better thought leadership on the next steps in its artificial intelligence strategy.

It’s worth remembering that Amazon has been a leading player in AI for a long time. Its Alexa assistant and Echo devices helped put voice recognition and connected assistants on the map; the company has been working on autonomous services for airborne and ground-level delivery as well as in-store purchasing; it uses machine learning to improve how products are targeted; AWS is a big player in AI compute; and now it is pouring billions into investments in big AI startups.

Yet, for at least a year, in the wake of OpenAI’s GPT advancements, Amazon has grappled with the impression internally and externally that it is “falling behind” on the technology.

Is it true? Is it just optics? Regardless of the answer, Ng’s appointment can only be helpful for advancing Amazon’s profile in the realm of AI. Put simply, the company wants, and believes it needs, to make real innovation in the space. Andy Jassy, in Amazon’s annual letter to shareholders, published shortly after the Ng announcement, went so far as to call GenAI Amazon’s fourth “pillar” (alongside Marketplace, Prime and AWS) in terms of future focus. That requires serious, high-level direction on how to make more than just follow-on moves.

Image Credits: TechCrunch

Ng is potentially a triple-threat board appointment: He has experience in academia, investing, and hands-on building, and he has usually handled all three roles simultaneously. He is currently an adjunct professor at Stanford; a general partner at a venture studio called AI Fund; and he heads edtech company DeepLearning.AI and is the founder of computer vision startup Landing AI. Oh, and he’s also chair of Coursera, another edtech startup he founded and used to lead.

Ng has also served as chief scientist and VP at Chinese search giant Baidu, and he founded and led Google Brain, which was Google’s first big foray into building and applying AI tech across its products.

Amazon did not provide any statement from Ng in its announcement. We have reached out to him directly, and we’ll update when and if we hear back.

It may feel like a new wave of companies and thinkers are setting the pace in AI, but the Amazons of the world are certainly not standing by idly.



Robotic Automations

Webflow acquires Intellimize to add AI-powered webpage personalization | TechCrunch


Webflow, a web design and hosting platform that’s raised over $330 million at a $4 billion valuation, is expanding into a new sector: marketing optimization.

Today, Webflow announced that it acquired Intellimize, a startup leveraging AI to personalize websites for unique visitors. The terms of the deal weren’t disclosed. But a source familiar with the matter tells TechCrunch that the purchase price was in the “eight-figure” range.

The majority of the Intellimize team — around 50 people — will join Webflow. But some staffers either took outplacement packages or were let go and given severance; Webflow wouldn’t say how many.

Vlad Magdalin, the CEO of Webflow, said Intellimize was a natural fit for Webflow’s first-ever acquisition because its product meets a need many Webflow customers share: personalizing and optimizing their websites.

“The common thread among our many customer segments is that they’re building professional websites that are meant not only to look great, but ultimately to drive business results — and tons of our customers and partners have been asking us to help them improve how well their websites are able to bring them new customers beyond the initial build phase,” Magdalin said. “Intellimize quickly emerged as a really impressive product in this space that many marketing and growth leaders raved about — and it soon became very evident that combining the forces of our respective products and our teams can create a much more powerful combination.”

Guy Yalif, former head of vertical marketing at Twitter, co-founded Intellimize in 2016 with Brian Webb and Jin Lim. While in a previous exec role at Yahoo, Yalif worked with Lim, Yahoo’s VP of engineering at the time, and Webb, who was an architect on Yahoo’s personalized content recommendation team. (Full disclosure: Yahoo is TechCrunch’s corporate parent.)

With Intellimize, Yalif, Webb and Lim — drawing on their combined marketing know-how — set out to build a platform that could generate personalized webpages for visitors on demand.

The motivation? Seventy-four percent of customers feel frustrated when a website’s content isn’t customized, according to stats cited by Porch Group Media. Companies that do personalize report not only increased revenue, but more efficient marketing spend.

Intellimize taps AI to generate pages, automatically making adjustments in response to how users behave (and where they’re coming from). Companies create a website template, then Intellimize’s AI runs experiments, fiddling with various knobs and dials, as it were, before delivering the top-performing results to visitors.
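
For a sense of what “running experiments” means mechanically, here is a small, illustrative sketch of an epsilon-greedy bandit that splits traffic across page variants and gradually favors the best performer. It is not Intellimize’s algorithm; the variant names and conversion rates are invented for the example.

```python
import random

# Invented page variants with true conversion rates the optimizer can't see directly.
TRUE_RATES = {"headline_a": 0.04, "headline_b": 0.07, "headline_c": 0.05}

def serve_visitors(n_visitors: int = 10_000, epsilon: float = 0.1) -> None:
    shows = {v: 0 for v in TRUE_RATES}
    conversions = {v: 0 for v in TRUE_RATES}

    for _ in range(n_visitors):
        if random.random() < epsilon:
            # Explore: occasionally show a random variant to keep learning.
            variant = random.choice(list(TRUE_RATES))
        else:
            # Exploit: show the variant with the best observed conversion rate so far.
            variant = max(TRUE_RATES, key=lambda v: conversions[v] / max(shows[v], 1))

        shows[variant] += 1
        if random.random() < TRUE_RATES[variant]:  # simulate whether this visitor converts
            conversions[variant] += 1

    for v in TRUE_RATES:
        rate = conversions[v] / max(shows[v], 1)
        print(f"{v}: shown {shows[v]} times, observed rate {rate:.3f}")

if __name__ == "__main__":
    serve_visitors()
```

Production systems layer much more on top (per-visitor features, statistical guardrails, holdout groups), but the core loop of trying variants and shifting traffic toward winners looks roughly like this.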

Now, Intellimize isn’t the only one doing this.

Amazon’s Personalize can drive tailored product and search recommendations on the web. Startups such as Evolv AI and Episerver-owned Optimizely automate certain forms of A/B web testing with algorithms. That’s not to mention generative AI-driven platforms like Adobe’s GenStudio, Movable Ink, Mutiny and Blend, which are ushering in new forms of experience personalization.

But Intellimize — whether on the strength of its tech, partnerships or advertising — managed to establish a sizeable foothold in the market for AI-powered marketing.

At the time of the acquisition, Intellimize — which had raised over $50 million from investors like Cobalt Capital, Addition, Amplify Partners and Homebrew — had several tentpole customers including Sumo Logic, Dermalogica and ZoomInfo.

“The Intellimize team had already built most of the personalization and optimization tools that we were considering building in-house, and had an impressive roster of enterprise customers using their solution,” Magdalin said. “Their team and product demonstrated world-class expertise in machine learning and AI to power website personalization and conversion rate optimization, which we believe would be a very powerful addition to Webflow’s existing platform.”

So what changes can Intellimize customers expect as the company joins the Webflow fold? Not many disruptive ones, Yalif stressed. Intellimize will continue to be sold standalone to non-Webflow customers, but it’ll increasingly link to — and integrate with — Webflow services. Yalif, meanwhile, will join Webflow as “head of personalization,” guiding — what else? — personalization product efforts at Webflow.

“Joining Webflow allows us to scale and significantly accelerate our forward momentum,” Yalif said. “Webflow is building out its integrated solution for website building, design and optimization. Intellimize is the foundation of the personalization and optimization pieces of that vision. Together, we can take on larger, much more expensive, harder-to-use players in the digital experience space.”

Here’s Magdalin’s take:

“Integrating Intellimize expands our primary audience beyond designers and developers … For the initial phase [of the merger], we’re focusing on natively integrating both of our products together — so customers should expect the best of Webflow and the best of Intellimize to be available as one unified product experience later this year.”

