
Tag: data


Audible to test using Prime Video data for audiobook recommendations as Spotify competition heats up | TechCrunch


Amazon has historically operated audiobook marketplace Audible as a separate entity, unconnected to the retailer’s broader goals and ambitions. Today, that’s changing a bit with the launch of a test that will allow Audible users to receive recommendations about what to listen to next based on their Prime Video viewing behavior. The company says this […]


US Patent and Trademark Office confirms another leak of filers' address data | TechCrunch


The federal government agency responsible for granting patents and trademarks is alerting thousands of filers whose private addresses were exposed following a second data spill in as many years. The U.S. Patent and Trademark Office (USPTO) said in an email to affected trademark applicants this week that their private domicile address — which can include […]


Atlan scores $105M for its data control plane, as LLMs boost importance of data | TechCrunch


For the founders of Atlan, a data governance startup, data has always been at the heart of what they do, even before they launched the company. In fact, co-founders Prukalpa Sankar and Varun Banka got their start at SocialCops, where they built India’s national data platform. So it probably shouldn’t come as a surprise that they have developed a tool for managing and collaborating around data, pulling together myriad data sources and bringing some order to the data chaos inside large organizations.

The company was founded in 2020, but the data problem has only grown in importance since then. Today, as companies try to get their data houses in order to take advantage of generative AI, the problem is even more pressing. Perhaps that’s why the company announced a $105 million investment at a healthy $750 million valuation on Wednesday. The startup seemingly finds itself in the right place at the right time, solving the right problem – and investors are looking to take advantage.

Atlan is built on the premise that there is an inherent complexity in every organization’s data ecosystem. Atlan’s goal is to bring order to it and help people understand that even though there are a variety of tools and data repositories, it’s possible to get a grip on the big picture.

“So what we do at Atlan is we scan your entire data ecosystem. We connect to Snowflake and Databricks, your BI tools, your AI LLM models and your source systems like Salesforce, and we create a single source of truth, essentially a map across your API ecosystem,” Sankar told TechCrunch.

The idea is to build a data fabric that helps people understand how data connects across an organization, making it easier to collaborate around data, search across it, and fix problems in an automated way, such as a number that won’t update in a BI dashboard.
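Atlan hasn’t published the internals of that fabric, but the underlying idea, a lineage graph that can be walked upstream from a broken dashboard number, can be sketched in a few lines of Python. The asset names here are hypothetical, purely for illustration:

```python
# A toy lineage map: each data asset lists the upstream assets it depends on.
# Asset names are hypothetical; a real catalog would be populated by scanning
# warehouses, BI tools and source systems, as described above.
UPSTREAM = {
    "revenue_dashboard.kpi": ["warehouse.daily_revenue"],
    "warehouse.daily_revenue": ["warehouse.raw_orders"],
    "warehouse.raw_orders": ["salesforce.opportunities"],
}

def trace_upstream(asset: str) -> list[str]:
    """Walk the graph to find every asset a given asset depends on."""
    deps = []
    for parent in UPSTREAM.get(asset, []):
        deps.append(parent)
        deps.extend(trace_upstream(parent))
    return deps

# If the dashboard KPI stops updating, these are the candidates to check:
print(trace_upstream("revenue_dashboard.kpi"))
# ['warehouse.daily_revenue', 'warehouse.raw_orders', 'salesforce.opportunities']
```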

Sankar says companies spend too much time simply trying to understand the data they have, while also working to ensure it is delivered to dashboards accurately and made accessible to the employees who need it most.

The company is not getting this kind of money purely based on good timing, though. It grew ARR 7x over the last two years and 31x over the last three – keeping in mind the company has only been around for four years. The startup also reports that it achieved an 80% win rate in competitive trials in 2023.

While they wouldn’t discuss the number of customers they have, customer wins include Nasdaq, HubSpot, Elastic, Dr. Martens and Porto Seguro, to name a few.

Atlan currently has 275 employees, with plans to expand given the new capital. Although Sankar wasn’t ready to commit to a specific number of new employees, she said they are hiring.

Today’s round was led by GIC and Meritech Capital along with existing investors Salesforce Ventures and PeakXV. Prior investors include Insight Partners and Waterbridge Ventures. The company has now raised a total of $206 million.



Brandywine Realty Trust says data stolen in ransomware attack | TechCrunch


U.S. realty trust giant Brandywine Realty Trust has confirmed a cyberattack that resulted in the theft of data from its network.

In a filing with regulators on Tuesday, the Philadelphia-based Brandywine described the cybersecurity incident as unauthorized access and the “deployment of encryption” on its internal corporate IT systems, consistent with a ransomware attack.

Brandywine said the cyberattack caused disruption to the company’s business applications that support its operations and corporate functions, including its financial reporting systems.

The company said it shut down some of its systems and believes it has contained the activity. The company confirmed that hackers took files from its systems, but it was still investigating whether any sensitive or personal information was taken.

Brandywine is one of the largest real estate investment trusts (REITs) in the United States, with a portfolio of about 70 properties across Austin, Philadelphia, and Washington, D.C., as of its last earnings report in April.

Some of the company’s biggest tenants reportedly include IBM, Spark Therapeutics, and Comcast.

Since the introduction of new rules in December, publicly traded U.S. companies have been obliged to disclose to investors cybersecurity events that may have a material impact on the business. As of the filing, Brandywine said it does not believe the incident is “reasonably likely to materially impact” its operations.



Stack Overflow signs deal with OpenAI to supply data to its models | TechCrunch


OpenAI is collaborating with Stack Overflow, the Q&A forum for software developers, to improve its generative AI models’ performance on programming-related tasks.

As a result of the partnership, announced Monday, OpenAI’s models, including models served through its ChatGPT chatbot platform, should get better over time at answering programming-related questions, the two companies say. At the same time, Stack Overflow will benefit from OpenAI’s expertise in developing new generative AI integrations on the Stack Overflow platform.

The first set of features will go live by the end of June.

The tie-up with OpenAI is a remarkable reversal for Stack Overflow, which initially banned ChatGPT-generated responses on its platform over fears they would flood it with spam.

Stack Overflow began experimenting with generative AI features last April, promising to craft models that “reward” devs who contribute knowledge to the platform. In July, the company launched a conversational search tool that lets users pose queries and receive answers based on Stack Overflow’s database of over 58 million questions and answers, along with tools for businesses to fine-tune searches on their own documentation and knowledge bases.

Some members of Stack Overflow’s developer community rebelled against the changes, pointing out concerns related to the validity of information generated by AI, information overload and data privacy for individual contributors on the platform.

There was at least some basis for those concerns. An analysis of more than 150 million lines of code committed to project repos over the past several years by GitClear found that generative AI dev tools are resulting in more mistaken code being pushed to codebases. Elsewhere, security researchers have warned that such tools can amplify existing bugs and security issues in software projects.

But despite the apparent flaws, developers are embracing generative AI tools for at least some coding tasks. In a Stack Overflow poll from June 2023, 44% of developers said that they already use AI tools in their development process, while 26% plan to start soon.

This has precipitated something of an existential crisis for Stack Overflow. Traffic to the platform has reportedly dipped significantly since the release of capable new generative AI models last year — models that in many cases were trained on data from Stack Overflow.

So now, as it cuts costs, Stack Overflow is pursuing licensing agreements with AI providers.

The company’s deal with OpenAI — the financial terms of which weren’t disclosed — comes after Stack Overflow partnered with Google to enrich Google’s Gemini models with Stack Overflow data and work with Google to bring more AI-powered features to its platform. Stack Overflow stressed at the time that the agreement wasn’t exclusive — and indeed, that turned out to be the case.

Prashanth Chandrasekar, CEO of Stack Overflow, previously said that 10% of the platform’s nearly 600 staff was focused on its AI strategy, and has described potential additional revenue from the strategy as key to ensuring Stack Overflow can keep attracting users and maintaining high-quality information.

“Stack Overflow is the world’s largest developer community,” Chandrasekar said in a press release this morning. “Through [our] industry-leading partnership with OpenAI, we strive to redefine the developer experience, fostering efficiency and collaboration through the power of community, best-in-class data, and AI experiences. Our goal with OverflowAPI, and our work to advance the era of socially responsible AI, is to set new standards with vetted, trusted, and accurate data that will be the foundation on which technology solutions are built and delivered to our user.”



Allozymes puts its accelerated enzymatics to work on a data and AI play, raising $15M | TechCrunch


Allozymes’ ingenious method of quickly testing millions of bio-based chemical reactions is proving to be not just a useful service, but the basis of a unique and valuable dataset. And where there’s a dataset, there’s AI — and where there’s AI, there are investors. The company just raised a $15 million Series A to grow its business from a helpful service to a world-class resource.

We first covered the biotech startup in 2021, when it was taking its first steps: “Back then we were less than five people, and at our first lab — a thousand square feet,” recalled CEO and founder Peyman Salehian.

The company has grown to 32 people in the U.S., Europe and Singapore, and has 15 times the lab space, which it has used to accelerate its already exponentially faster enzyme-screening technique.

The company’s core tech hasn’t changed since 2021, and you can read the detailed description of it in our original article. But the upshot is that enzymes, chains of amino acids that perform certain tasks in biological systems, have until now been rather difficult to either find or invent. That’s because of the sheer number of variations: a molecule may be hundreds of amino acids long, with 20 to choose from for each position, and every permutation potentially having a totally different effect. You get into the billions of possibilities very quickly!
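A quick back-of-the-envelope calculation shows how fast that space explodes; the throughput figures below are the ones quoted in this article:

```python
# A chain N amino acids long, with 20 choices per position, has 20**N variants.
for n in (5, 7, 10, 100):
    print(f"{n:>3} positions: {20**n:.2e} possible sequences")

# Even varying just 7 positions yields ~1.3 billion candidates. Screening them:
variants = 20**7
print(f"{variants / 300:,.0f} days at a few hundred assays per day")
print(f"{variants / 20_000_000:,.0f} days at Allozymes' 20 million per day")
```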

Using traditional methods, these variations can be tested at a rate of a few hundred per day in a reasonable lab space, but Allozymes uses a method in which millions of enzymes can be tested per day by packing them in little droplets and passing them through a special microfluidics system. You could think about it like a conveyor belt with a camera above it, scanning each item that zooms by and automatically sorting them into different bins.

Droplets containing enzyme variants are assessed and if necessary redirected in the microfluidic system. Image Credits: Allozymes

These enzymes could be just about anything that’s needed in the biotech and chemical industry: If you need to turn raw materials into certain desirable molecules, or vice versa, or perform numerous other fundamental processes, enzymes are how you do it. Finding a cheap and effective one is seldom easy, and until recently the entire industry was testing about a million possibilities per year — a number Allozymes aims to multiply over a thousandfold, targeting 7 billion variants in 2024.

“[In 2021] we were just building the machines, but now they’re working very well and we are screening up to 20 million enzyme variants per day,” Salehian said.

The process has already attracted customers across a number of industries, some of which Allozymes can’t disclose due to NDAs, but others have been documented in case studies:

  • Phytoene is a compound found naturally in tomatoes and ordinarily harvested in tiny quantities from the skins of millions of them. Allozymes found a pathway to make the same chemical in a bioreactor, using 99% less water (and presumably space).
  • Bisabolol is another useful chemical found naturally in the candeia tree, an Amazon-native plant that has been driven to endangered status. Now a bio-identical bisabolol can be produced in any quantity using a bioreactor and the company’s enzymatic pathway.
  • Fibers of plants and fruits like bananas can be turned into a substance called “soluble sweet fiber,” an alternative to other sugars and sweeteners; Allozymes got a million-dollar grant to accelerate this less-than-easy process. Salehian reports that they have made cookies and some bubble tea with the results.

I asked about the possibility of microplastics-degrading enzymes, which have been a target of much research and also figure in Allozymes’ own promotional materials. Salehian said that while it’s possible, at present it isn’t economically feasible under their current business model — basically, a customer would need to come to the company saying, “I want to pay to develop this.” But it’s on their radar, and they may be working in plastics recycling and handling soon.

So far this has all more or less fallen under the company’s original business model, which amounts to enzyme optimization as a service. But the roadmap involves expanding into more from-scratch work, like finding a molecule to match a need rather than improving an existing process.

The enzyme-tailoring service Allozymes has been offering will be called SingZyme (as in single enzyme), and will continue to be an entry-level option, filling the “we want to do this 100x faster or cheaper” use case. A more expansive service called MultiZyme will take a higher-level approach, discovering or refining multiple enzymes to fulfill a more general “we need a thing that does this.”

The billions of data points they collect as part of these services will remain their IP, however, and will constitute “the biggest enzyme data library in the world,” Salehian said.

CEO Peyman Salehian and CTO Akbar Vahidi, co-founders of Allozymes. Image Credits: Allozymes

“You can give the structure to AlphaFold and it will tell you how it folds, but it can’t tell you what will happen if it binds with another chemical,” Salehian said, and of course that reaction is the only part industry is concerned with. “There’s no machine learning model in the world that can tell you exactly what to do, because the data we have is so little, and so fragmented; we’re talking 300 samples a day for 20 years,” a number Allozymes’ machines can easily surpass in a single day.

Salehian said that they are actively developing a machine learning model based on the data they have, and even tested it on a known outcome.

“We fed the data to the machine learning model, and it came back with a new molecule suggestion that we are already testing,” he said, which is a promising initial validation of the approach.

The idea is hardly unprecedented: We’ve covered numerous companies and research projects that have found machine learning models can be very helpful in sorting through huge datasets, offering extra confidence even if their outcomes can’t be substituted for the real process.

The $15 million A round includes new investors Seventure Partners, NUS Technology Holdings, Thia Ventures and ID Capital, with repeat investment from Xora Innovation, SOSV, Entrepreneur First and Transpose Platform.

Salehian said the company is in great shape and has plenty of time and money to achieve its ambitions — with the exception that it may raise a smaller amount later this year in order to fund an expansion into pharmaceuticals and open a U.S. office.



UnitedHealth data breach should be a wakeup call for the UK and NHS | TechCrunch


The ransomware attack that has engulfed U.S. health insurance giant UnitedHealth Group and its tech subsidiary Change Healthcare is a data privacy nightmare for millions of U.S. patients, with CEO Andrew Witty confirming this week that it may impact as much as one-third of the country.

But it should also serve as a wakeup call for countries everywhere, including the U.K. where UnitedHealth now plies its trade via the recent acquisition of a company that manages data belonging to millions of NHS (National Health Service) patients.

As one of the largest health care companies in the U.S., UnitedHealth is well known domestically, intersecting with every facet of the healthcare industry, from insurance and billing all the way through physician and pharmacy networks — it’s a $500 billion juggernaut, and the 11th largest company globally by revenue. But in the U.K., UnitedHealth is practically unknown, mostly because it hasn’t had much business across the pond — until six months ago.

After a 16-month regulatory process ending in October, UnitedHealth subsidiary Optum UK, via an affiliate called Bordeaux UK Holdings II Limited, finally took ownership of EMIS Health in a $1.5 billion deal. EMIS Health provides software that connects doctors with patients, allowing them to book appointments, order repeat prescriptions, and more. One of these services is Patient Access, which claims some 17 million registered users who collectively made 1.4 million family doctor appointments through the app last year and ordered north of 19 million repeat prescriptions.

There’s nothing to suggest that U.K. patient data is at risk here — these are different subsidiaries, with different setups, under different jurisdictions. But according to his Senate testimony on Wednesday, Witty blamed the hack on the fact that since UnitedHealth acquired Change Healthcare in 2022, it hadn’t updated the company’s systems — and within those systems was a server that didn’t have multi-factor authentication (MFA) enabled.

We know that hackers stole health data using “compromised credentials” to access a Change Healthcare Citrix portal that had been intended for employees to access internal networks remotely. Incredibly, Witty said that the company was still working to understand why MFA wasn’t enabled, two months after the attack. This doesn’t inspire a great deal of confidence among U.K. health care professionals and patients using EMIS Health under the auspices of its new owners.

This isn’t an isolated case.

Separately this week, 25-year-old hacker Aleksanteri Kivimäki was jailed for more than six years for infiltrating a company called Vastaamo in 2020, stealing health care data belonging to thousands of Finnish patients and attempting to extort and blackmail both the company and affected patients.

Whether or not individual ransomware attacks prove successful, they are collectively lucrative — payments to perpetrators reportedly doubled to more than $1 billion in 2023, a record-breaking year by many accounts. During his testimony, Witty confirmed previous reports that UnitedHealth made a $22 million ransom payment to its hackers.

Health data as valuable commodity

But the biggest takeaway from all this is that personal data — particularly health data — is a huge global commodity, and it should be protected accordingly. However, we keep seeing incredibly poor cybersecurity hygiene, which should be a concern for everyone.

As TechCrunch wrote a couple of months back, it’s getting increasingly difficult to access even the most basic form of healthcare on the state-funded NHS without agreeing to give private companies access to your data — whether that’s a billion-dollar multinational, or a venture-backed startup.

There might be legitimate operational and practical reasons why working with the private sector makes sense, but the reality is such partnerships increase the attack surface that bad actors can target — regardless of whatever obligations, policies and promises a company might have in place.

Many U.K. family doctor surgeries now require patients to use third-party triaging software to make appointments, and unless you go through the privacy policies with a fine-tooth comb, it’s often not clear whom the patient is actually doing business with.

Digging into the privacy policy of one triaging service provider called Patchs Health, which says it supports over 10 million patients across the NHS, reveals that it is merely the data “sub-processor” responsible for developing and maintaining the software. The main data processor contracted to deliver the service is actually a private equity-backed company called Advanced, which was hit by a ransomware attack two years ago, forcing NHS services offline. Similar to the UnitedHealth attack, legitimate credentials were used to access a Citrix server.

You don’t have to squint to see the parallels between what has happened with UnitedHealth, and what could happen in the U.K. with the myriad private companies striking partnerships with the NHS.

Finland also serves as a cautionary reminder as the NHS creeps deeper into the private realm. Dubbed one of the country’s biggest ever crimes, the Vastaamo data breach came about after a now-defunct private psychotherapy company was sub-contracted by Finland’s public health care system. Aleksanteri Kivimäki infiltrated an insecure Vastaamo database, and after Vastaamo refused to pay a reported €450,000 Bitcoin ransom, Kivimäki attempted to blackmail thousands of patients, threatening to release intimate therapy notes.

In the investigation that followed, Vastaamo was found to have wholly inadequate security processes in place. Its patient database was exposed to the open internet, including unencrypted sensitive data such as contact information, social security numbers, and therapist notes. The Finnish data protection ombudsman noted that the most likely cause for the breach was an “unprotected MySQL port in the database,” where the root user account wasn’t password protected. This account enabled unbridled database access from any IP address, and the server had no firewall in place.
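The failures listed there (a database port open to the internet, a passwordless root account, no firewall) are the kind that a trivial reachability probe can surface. A minimal sketch, with a hypothetical hostname:

```python
# Checks whether a MySQL port accepts TCP connections from an arbitrary
# machine on the internet -- the exposure described in the Vastaamo case.
import socket

def port_is_open(host: str, port: int = 3306, timeout: float = 3.0) -> bool:
    """Return True if host:port accepts a TCP connection from this machine."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "db.example.com" is a placeholder, not a real target.
if port_is_open("db.example.com"):
    print("Database port is reachable from the open internet")
```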

In the U.K., there have been well-vocalized concerns around how the NHS is opening access to data. The most high-profile partnership came just last year, when Peter Thiel-backed big data analytics company Palantir was awarded massive contracts by NHS England to help it transition to a new Federated Data Platform (FDP) — much to the chagrin of doctors and data privacy advocates across the country.

It all seems somewhat inevitable though. Privacy advocates shout and scream, but big companies with lots of cash keep getting the keys to sensitive data belonging to millions of people. Promises are made, assurances given, processes implemented — then someone forgets to set up basic MFA, or they leave an encryption key under the doormat, and everything blows up.

Rinse and repeat.





Danti's natural language search engine for Earth data soars with $5M in new funding | TechCrunch


Danti, an artificial intelligence company building a superpowered search engine for Earth data, has brought on prominent defense tech investor Shield Capital as it looks to scale its technology for government customers.

Founded by Jesse Kallman in early 2023, Danti has developed a natural language search engine for data that has historically been highly siloed, like satellite imagery, collating it with other commercial and government sources to answer questions that span multiple sources and domains.

For example, an analyst can pose a complex question in simple language, like “What are the latest tank movements in Eastern Ukraine?” and receive in turn straightforward answers collated across data sources.

The idea is to empower a single analyst to do more, Kallman said in a recent interview. While American adversaries are throwing manpower at the problem of analyzing huge amounts of data, Danti aims to help “one analyst do the work of 10 or 15,” he said. It means that a relatively straightforward question — where is a particular ship off the coast of Lagos, Nigeria, for example — can potentially be answered in seconds, rather than hours.

“We’re not replacing the analysts,” Kallman clarified. “We’re helping them do their work way faster, so that they can get to the part that humans are way better at, which is synthesizing and deciding, ‘What do I now do about this information? How do I want to report on it?’”

Among the startup’s early customers is the U.S. Space Force, which is using Danti’s product to help officers easily search and share data. The use of natural language models in the search engine makes for an intuitive, straightforward user experience; no doubt this is paramount in high-pressure situations where analysts must make complex decisions but have little time to trawl through reams of satellite or drone data.

Right now, Danti is squarely focused on government, though in the longer term it plans to roll out a version of its product for commercial industry. This version would focus on property records, parcel information, and risk data, to serve markets like electric utilities and insurance, Kallman said. Customers will also be able to connect their own information into Danti’s engine to use its natural language processing to query their own data.

The $5 million round was led by Shield Capital and includes participation from the startup’s existing investors Tech Square Ventures, Humba Ventures and Leo Polovets, Space.VC, and Radius Capital. Kallman said the startup looked deliberately for a defense-focused fund to lead its next round, particularly as the company looks to execute its government go-to-market plan and scale its engineering team.

Since last summer, when the company announced its $2.75 million pre-seed, the team has grown to over twenty people, and Kallman said the engineering team will grow even more with the new injection of funds.



US fines telcos $200M for sharing customer location data without consent | TechCrunch


The U.S. Federal Communications Commission said on Monday that it is fining the four major U.S. wireless carriers around $200 million in total for “illegally” sharing and selling customers’ real-time location data without their consent.

AT&T’s fine is more than $57 million, Verizon’s is almost $47 million, T-Mobile’s is more than $80 million and Sprint’s is more than $12 million, according to the FCC’s announcement.

“Our communications providers have access to some of the most sensitive information about us. These carriers failed to protect the information entrusted to them. Here, we are talking about some of the most sensitive data in their possession: customers’ real-time location information, revealing where they go and who they are,” FCC Chairwoman Jessica Rosenworcel said in the announcement.

The FCC said its investigative arm, the Enforcement Bureau, concluded that the four companies sold access to their customers’ location data to third-party companies, which the FCC called “aggregators,” and which in turn resold the location data to other companies. This series of sales and resales effectively created a whole gray market for cell phone subscribers’ historical and real-time location data. Most customers had no idea such a market for their data even existed, let alone consented to the sale of their data.

Cell phone carriers are required by law to “maintain the confidentiality of such customer information and to obtain affirmative, express customer consent before using, disclosing, or allowing access to such information,” the FCC wrote.

The fines come years after investigations by news organizations revealed that the four carriers were sharing this type of data with law enforcement and bounty hunters, among other organizations.

In 2018, The New York Times reported that law enforcement and correction officials across the U.S. used a company called Securus Technologies to track people’s locations. Securus’ solution relied on “a system typically used by marketers and other companies to get location data from major cell phone carriers,” the NYT wrote.

The following year, a Motherboard investigation revealed that bounty hunters could geolocate any cell phone customer for as little as $300. “These surveillance capabilities are sometimes sold through word-of-mouth networks,” Motherboard’s Joseph Cox, who is now at 404 Media, wrote at the time.

The FCC wrote that despite these public reports, the four carriers failed to put safeguards in place “to ensure that the dozens of location-based service providers with access to their customers’ location information were actually obtaining customer consent,” and kept selling the data.

All four carriers criticized the decision and said they intend to appeal it.

T-Mobile spokesperson Tara Darrow said in a statement that “this industry-wide third-party aggregator location-based services program was discontinued more than five years ago after we took steps to ensure that critical services like roadside assistance, fraud protection and emergency response would not be disrupted.”

Darrow said that T-Mobile, which merged with Sprint in 2020, will appeal the decision.

“We take our responsibility to keep customer data secure very seriously and have always supported the FCC’s commitment to protecting consumers, but this decision is wrong, and the fine is excessive. We intend to challenge it,” the statement read.

AT&T spokesperson Alex Byers also said the company will appeal, and said that the FCC decision “lacks both legal and factual merit.”

“It unfairly holds us responsible for another company’s violation of our contractual requirements to obtain consent, ignores the immediate steps we took to address that company’s failures, and perversely punishes us for supporting life-saving location services like emergency medical alerts and roadside assistance that the FCC itself previously encouraged. We expect to appeal the order after conducting a legal review,” Byers said in a statement sent to TechCrunch.

Verizon spokesperson Rich Young said that the “FCC’s order gets it wrong on both the facts and the law, and we plan to appeal this decision.”

“In this case, when one bad actor gained unauthorized access to information relating to a very small number of customers, we quickly and proactively cut off the fraudster, shut down the program, and worked to ensure this couldn’t happen again,” the statement read. “Keep in mind, the FCC’s order concerns an old program that Verizon shut down more than half a decade ago. That program required affirmative, opt-in customer consent and was intended to support services like roadside assistance and medical alerts.”



Seam wants to make customer data accessible to every business user | TechCrunch


As data access becomes increasingly tied to business success, making data available to all business users, regardless of their data-wrangling skills, has grown in importance. The founders of Seam, an early-stage startup, experienced the need to make data more accessible firsthand when they were at Okta, and decided to launch a company to solve this problem, especially as it relates to customer data.

On Tuesday, they announced a $5 million seed round to make their vision a reality, and said they are making the product available to the public for the first time.

“We’re building what we’re calling the AI interface for customer information. And our mission is really to give anyone regardless of their technical ability, what we’re calling business users, the opportunity to use data to answer any questions they have,” company CEO and co-founder Nicholas Scavone told TechCrunch.

The way they are doing that is via a generative AI prompt interface that lets people ask questions about customer data and get answers back without understanding SQL queries. “Our solution to this is really to build kind of an end-to-end system that gives you a simple chat interface where you can have a conversation with your data in natural language, specifically around the sales and marketing systems,” he said.

The problem with current systems is that getting information out of a data warehouse — Seam is built on top of Snowflake — requires knowing SQL, which creates friction for most business users and forces them to go to a data analyst for the information they need to do their jobs. Generative AI systems can turn a plain-language query into SQL code automatically and return an answer. He says he and his co-founders recognized this capability was a business opportunity.
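Seam hasn’t disclosed its implementation, but the text-to-SQL pattern described here can be sketched in a few lines. The model name, prompt and toy schema below are illustrative assumptions, not Seam’s actual stack:

```python
# Illustrative text-to-SQL flow: an LLM translates a business user's question
# into a SQL query against a known schema. Not Seam's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = """
customers(id, name, segment, created_at)
orders(id, customer_id, total_usd, ordered_at)
"""

def question_to_sql(question: str) -> str:
    """Ask the model for a single SQL query answering the question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Translate the user's question into one SQL query "
                        f"for this schema. Return only SQL.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(question_to_sql("Which five customers spent the most last quarter?"))
```

In production, the generated SQL would then be validated and run against the warehouse (Snowflake, in Seam’s case), with the result summarized back to the user in plain language.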

“Whenever you see friction like this in a business process, you understand that there’s an opportunity, and that discovery happened to time well with this AI technology shift. All of a sudden it was feasible to solve this problem with natural language without knowing SQL,” he said.

They launched the company in March 2023 and spent a year building. It took a long time to build because they were trying to simplify something incredibly complex, something that a whole team at Okta struggled with. “I think that it took that long because there’s an incredible amount of data infrastructure that we had to set up. We had to integrate with more than 20 applications. We had to make sure that pipelines can be created via one click,” he said. And it was a lot of work trying to automate what it took a bunch of highly skilled people to build prior to this.

With the product together, they are looking to scale their market now. “We’re starting to really scale. Our platform is sturdy, it’s production-grade, we’re working with some great enterprises. So that’s how I would kind of look at the future for us,” he said.

The $5 million seed investment was led by Bessemer Venture Partners with participation from Colle Capital, F7 Ventures, Ritual Capital and Umami Capital. The company also received investments from a number of industry angels.

