From Digital Age to Nano Age. WorldWide.

Tag: Google's

Robotic Automations

Google's Gradient backs Patlytics to help companies protect their intellectual property | TechCrunch


Patlytics, an AI-powered patent analytics platform, wants to help enterprises, IP professionals and law firms speed up their patent workflows, from discovery, analytics, comparisons and prosecution to litigation. 

The fledgling startup secured $4.5 million in an oversubscribed seed round that closed in a few days, led by Google’s AI-focused VC arm, Gradient Ventures.

Patlytics was co-founded by CEO Paul Lee, a former venture capitalist at Tribe, and CTO Arthur Jen, a serial entrepreneur who co-founded and served as CTO of the web3 wallet platform Magic. Their complementary skills, firsthand experience and deep understanding of the industry’s pain points laid the foundation for Patlytics.

The co-founders told TechCrunch they saw many opportunities in the IP space. Lee, who spent most of his previous career investing in vertical SaaS, AI and a few legal tech startups, came across many IP companies that used antiquated techniques in a workflow that, he thought, should be digitized. While working at Magic, Jen dealt intensively with filing and defending patents to protect the company’s technology.

“The AI revolution in patent intelligence is not just about efficiency; it’s about transforming how patent professionals strategize and engage with the entire patent lifecycle,” Lee said in an exclusive interview with TechCrunch. “Recognizing the intricate blend of technical and legal expertise required for patent work, we’ve developed our platform to be an indispensable ally for patent professionals.” 

Traditional patent prosecution and litigation workflows, which rely heavily on manual input, are complex and time-consuming, Lee continued. The research and discovery phase, which involves searching and analyzing large volumes of patent data, demands significant effort: internet searches, piecemeal manual investigations and inherently inefficient procedures.

What sets the startup apart from industry peers like Anaqua, Clarivate and Patsnap, Lee explained, is that Patlytics is “the sole provider offering end drafts and extensive chart solutions” with an AI-first approach to insights and analytics.

Another difference is that the platform doesn’t rely entirely on software: it keeps a place for human participation in the process.

Image Credits: Patlytics / Co-founders: Arthur Jen (CTO) and Paul Lee (CEO)

The outfit recently launched its product, which is SOC 2 certified, and already counts some top-tier law firms and a few enterprise in-house legal teams as customers. The company did not disclose the number of clients due to confidentiality agreements. Its target users include IP law firms and companies holding multiple patents.

“Protecting intellectual property remains a major priority and business requirement for information technology, physical product and biotechnology companies. As companies incorporate AI into their new products, companies from the automobile to the pharmaceutical industry are keen to protect new inventions and watch for infringement from competitors,” said Gradient’s general partner, Darian Shirazi. “We’re excited to partner with the team at Patlytics as they leverage the recent transformative innovations in AI to reinvent the intellectual property protection industry.” 

The outfit will use the proceeds to invest in product and AI development and go-to-market function, aiming to cover all relevant workflows for patent prosecution and litigation. In addition, it plans to bolster its engineering team. The company has 11 employees. 

“Knowing that navigating the intricate landscape of intellectual property can be laborious, our AI-integrated patent workflow aims to enhance the efficiency and provide insights, transforming IP protection into a dynamic force shaping the future technological landscape,” Jen said. “We build our technology with data security and privacy in mind, safeguarding sensitive information throughout the patent lifecycle.” 

Other participants in the round included 8VC, Alumni Ventures, Gaingels, Joe Montana’s Liquid 2 Ventures, Position Ventures, Tribe Capital and Vermilion Ventures. Notably, the round also attracted a host of angel backers, including partners at premier law firms, Datadog president Amit Agarwal, Fiscal Note founder Tim Hwang and Tapas Media founder Chang Kim. 


Software Development in Sri Lanka


Watch: Google's Gemini Code Assist wants to use AI to help developers


Can AI eat the jobs of the developers who are busy building AI models? The short answer is no, but the longer answer is not yet settled. News this week that Google has a new AI-powered coding tool for developers, straight from the company’s Google Cloud Next 2024 event in Las Vegas, means that the competitive pressure among major tech companies to build the best service for helping coders write more code, more quickly, is still heating up.

Microsoft’s GitHub Copilot, a service with similar outlines, has been steadily working toward enterprise adoption. Both companies want to eventually build developer-helping tech that can understand a company’s codebase, allowing it to offer more tailored suggestions and tips.

Startups are in the fight as well, though they tend to focus on more tailored solutions than the broader offerings from the largest tech companies; Pythagora, Tusk and Ellipsis from the most recent Y Combinator batch are working on app creation from user prompts, AI agents for bug-squashing, and turning GitHub comments into code, respectively.

Everywhere you look, developers are building tools and services to help their own professional cohort.

Developers learning to code today won’t know a world in which they don’t have AI-powered coding aids. Call it the graphing calculator era for software builders. But the risk — or the worry, I suppose — is that in time, the AI tools that are ingesting mountains of code to get smarter will eventually be able to do enough that fewer humans are needed to write code for companies. And if a company can spend less money and employ fewer people, it will; no job is safe, but some roles are just more difficult to replace at any given moment.

Thankfully, given the complexities of modern software services, ever-present tech debt and an infinite number of edge cases, what big tech and startups are busy building today seems to be a set of very useful coding aids, not something ready to replace or even reduce the number of humans building them. For now. I wouldn’t take the other end of that bet on a multi-decade time frame.

And for those looking for an even deeper dive into what Google revealed this week, you can head here for our complete rundown, including details on exactly how Gemini Code Assist works, and Google’s in-depth developer walkthrough from Cloud Next 2024.



Google's Gemini Pro 1.5 enters public preview on Vertex AI | TechCrunch


Gemini 1.5 Pro, Google’s most capable generative AI model, is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week.

Gemini 1.5 Pro launched in February, joining Google’s Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context it can process: from 128,000 up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”).

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. That’s about four times the amount of data that Anthropic’s flagship model, Claude 3, can take as input and about eight times the maximum context of OpenAI’s GPT-4 Turbo.
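Those equivalences amount to simple ratios, and a few lines of Python make them easy to sanity-check for any context size. The conversion factors below are taken from the figures quoted above (1 million tokens ≈ 700,000 words ≈ 30,000 lines of code); they are coarse rules of thumb from this article, not properties of any particular tokenizer.

```python
def context_budget(tokens: int) -> dict:
    """Estimate what a given context window can hold, using the
    rough ratios quoted in the article (integer math to avoid
    floating-point drift):
      1,000,000 tokens ~= 700,000 words ~= 30,000 lines of code."""
    return {
        "words": tokens * 700_000 // 1_000_000,
        "code_lines": tokens * 30_000 // 1_000_000,
    }

# Gemini 1.5 Pro's upper and lower context bounds:
print(context_budget(1_000_000))  # {'words': 700000, 'code_lines': 30000}
print(context_budget(128_000))    # {'words': 89600, 'code_lines': 3840}
```

By the same ratio, even the 128,000-token floor comfortably holds a novella-length document in a single prompt.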

A model’s context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, email, essay or e-book.

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic. This isn’t necessarily so with models with large contexts. And, as an added upside, large-context models can better grasp the narrative flow of data they take in, generate contextually richer responses and reduce the need for fine-tuning and factual grounding — hypothetically, at least.

So what specifically can one do with a 1 million-token context window? Lots of things, Google promises, like analyzing a code library, “reasoning across” lengthy documents and holding long conversations with a chatbot.

Because Gemini 1.5 Pro is multilingual — and multimodal in the sense that it’s able to understand images and videos and, as of Tuesday, audio streams in addition to text — the model can also analyze and compare content in media like TV shows, movies, radio broadcasts, conference call recordings and more across different languages. One million tokens translates to about an hour of video or around 11 hours of audio.

Thanks to its audio-processing capabilities, Gemini 1.5 Pro can generate transcriptions for video clips, as well, although the jury’s out on the quality of those transcriptions.

In a prerecorded demo earlier this year, Google showed Gemini 1.5 Pro searching the transcript of the Apollo 11 moon landing telecast (which comes to about 400 pages) for quotes containing jokes, and then finding a scene in movie footage that looked similar to a pencil sketch.

Google says that early users of Gemini 1.5 Pro — including United Wholesale Mortgage, TBS and Replit — are leveraging the large context window for tasks spanning mortgage underwriting; automating metadata tagging on media archives; and generating, explaining and transforming code.

Gemini 1.5 Pro doesn’t process a million tokens at the snap of a finger. In the aforementioned demos, each search took between 20 seconds and a minute to complete — far longer than the average ChatGPT query.

Google previously said that latency is an area of focus, though, and that it’s working to “optimize” Gemini 1.5 Pro as time goes on.

Of note, Gemini 1.5 Pro is slowly making its way to other parts of Google’s corporate product ecosystem, with the company announcing Tuesday that the model (in private preview) will power new features in Code Assist, Google’s generative AI coding assistance tool. Developers can now perform “large-scale” changes across codebases, Google says, for example updating cross-file dependencies and reviewing large chunks of code.





Google's Gemini comes to databases | TechCrunch


Google wants Gemini, its family of generative AI models, to power your app’s databases — in a sense.

At its annual Cloud Next conference in Las Vegas, Google announced the public preview of Gemini in Databases, a collection of features underpinned by Gemini to — as the company pitched it — “simplify all aspects of the database journey.” In less jargony language, Gemini in Databases is a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases.

One piece of Gemini in Databases is Database Studio, an editor for structured query language (SQL), the language used to store and process data in relational databases. Built into the Google Cloud console, Database Studio can generate, summarize and fix certain errors with SQL code, Google says, in addition to offering general SQL coding suggestions through a chatbot-like interface.

Joining Database Studio under the Gemini in Databases brand umbrella is AI-assisted migrations via Google’s existing Database Migration Service. Google’s Gemini models can convert database code and deliver explanations of those changes along with recommendations, according to Google.

Elsewhere, in Google’s new Database Center — yet another Gemini in Databases component — users can interact with databases using natural language and can manage a fleet of databases with tools to assess their availability, security and privacy compliance. And should something go wrong, those users can ask a Gemini-powered bot to offer troubleshooting tips.

“Gemini in Databases enables customers to easily generate SQL; additionally, they can now manage, optimize and govern entire fleets of databases from a single pane of glass; and finally, accelerate database migrations with AI-assisted code conversions,” Andi Gutmans, GM of databases at Google Cloud, wrote in a blog post shared with TechCrunch. “Imagine being able to ask questions like ‘Which of my production databases in east Asia had missing backups in the last 24 hours?’ or ‘How many PostgreSQL resources have a version higher than 11?’ and getting instant insights about your entire database fleet.”
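To make the second of Gutmans’ example questions concrete, here is a minimal sketch of the kind of SQL it might translate into. Everything here is hypothetical: the `db_fleet` table, its columns and the sample rows are invented for illustration (they are not Google Cloud’s actual fleet schema), and Python’s built-in sqlite3 stands in for the real metadata store.

```python
import sqlite3

# Hypothetical fleet-inventory table; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE db_fleet (name TEXT, engine TEXT, major_version INTEGER)"
)
conn.executemany(
    "INSERT INTO db_fleet VALUES (?, ?, ?)",
    [
        ("orders-prod", "postgresql", 15),
        ("users-prod", "postgresql", 11),
        ("logs-staging", "mysql", 8),
    ],
)

# 'How many PostgreSQL resources have a version higher than 11?'
# might compile down to something like:
count, = conn.execute(
    "SELECT COUNT(*) FROM db_fleet "
    "WHERE engine = 'postgresql' AND major_version > 11"
).fetchone()
print(count)  # 1 in this toy dataset
```

The value of the natural-language layer, as pitched, is precisely that users never have to write this query themselves — the generated SQL and its explanation are what Database Studio surfaces.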

That assumes, of course, that the Gemini models don’t make mistakes from time to time — which is no guarantee.

Regardless, Google’s forging ahead, bringing Gemini to Looker, its business intelligence tool, as well.

Launching in private preview, Gemini in Looker lets users “chat with their business data,” as Google describes it in a blog post. Integrated with Workspace, Google’s suite of enterprise productivity tools, Gemini in Looker spans features such as conversational analytics; report, visualization and formula generation; and automated Google Slide presentation generation. 

I’m curious to see if Gemini in Looker’s report and presentation generation work reliably well. Generative AI models don’t exactly have a reputation for accuracy, after all, which could lead to embarrassing, or even mission-critical, mistakes. We’ll find out as Cloud Next continues into the week with any luck.

Gemini in Databases could be perceived as a response of sorts to top rival Microsoft’s recently launched Copilot in Azure SQL Database, which brought generative AI to Microsoft’s existing fully managed cloud database service. Microsoft is looking to stay a step ahead in the budding AI-driven database race and has also worked to build generative AI into Azure Data Studio, the company’s set of enterprise data management and development tools.



India scrambles to curb PhonePe and Google's dominance in mobile payments | TechCrunch


The National Payments Corporation of India (NPCI), the governing body overseeing the country’s widely used Unified Payments Interface (UPI) mobile payment system, is set to engage with various fintech startups this month to develop a strategy to address the growing market dominance of PhonePe and Google Pay in the UPI ecosystem.

NPCI executives plan to meet with representatives from CRED, Flipkart, Fampay and Amazon among other players to discuss their key initiatives aimed at boosting UPI transactions on their respective apps and to understand the assistance they require, people familiar with the matter told TechCrunch.

UPI, built by a coalition of Indian banks, has become the most popular way Indians transact online, processing over 10 billion transactions monthly.

The new meetings are part of an increasing effort to address concerns raised by lawmakers and industry players regarding the market share concentration of Google Pay and PhonePe, which together account for nearly 86% of UPI transactions by volume, up from 82.5% at the end of December. Walmart owns more than three-fourths of PhonePe.

Paytm, the third-largest UPI player, has seen its market share decline to 9.1% by the end of March, down from 13% at the end of 2023, following a clampdown by the Reserve Bank of India (RBI).

An overview of India’s UPI ecosystem. (Image: Macquarie)

The conversation follows the central bank expressing “displeasure” to the NPCI over the growing duopoly in the payments space, a person familiar with the matter said. An NPCI spokesperson declined to comment.

In February, a parliamentary panel in India urged the government to support the growth of domestic fintech players that can offer alternatives to the Walmart-backed PhonePe and Google Pay apps.

The NPCI has long advocated for limiting the market share of individual companies participating in the UPI ecosystem to 30%. However, it has extended the deadline for firms to comply with this directive to the end of December 2024. The organization faces a unique challenge in enforcing this directive: It believes that it currently lacks a technical mechanism to do so, TechCrunch previously reported.

The RBI is also weighing an incentive plan to create a more favorable competitive field for emerging UPI players, another person familiar with the matter said. Indian daily Economic Times separately reported Wednesday that the NPCI is encouraging fintech companies to offer incentives to their users, promoting the use of their respective apps for making UPI transactions.

