
Tag: model


Vista Equity to take revenue optimization platform Model N private in $1.25B deal | TechCrunch


Model N, a platform used by companies such as Johnson & Johnson, AstraZeneca, and AMD to automate decisions related to pricing, incentives, and compliance, is going private in a $1.25 billion deal with private equity firm Vista Equity Partners. The acquisition underscores how PE firms continue to scoop up tech companies that have struggled to perform well in public markets in the last couple of years.

Vista Equity is doling out $30 per share in the all-cash transaction, representing a 12% premium over Friday’s closing price and a 16% premium over its 30-day average.

This is Vista Equity’s fifth such acquisition in the past 18 months, following Avalara ($8.4 billion); KnowBe4 ($4.6 billion); Duck Creek Technologies ($2.6 billion); and EngageSmart ($4 billion).

Founded in 1999, Model N’s software integrates with various data sources and internal systems to help companies analyze trends, pricing efficacy, market demand, and more. The platform is typically used in industries such as pharmaceuticals and life sciences, where there may be complex pricing structures, and where regulatory or market changes can impact business.

The San Mateo-headquartered company went public on the New York Stock Exchange (NYSE) in 2013, and it has generally performed well in the intervening years — particularly since around 2019, when its market cap started to climb steadily, hitting an all-time high of $1.6 billion last year. However, its valuation has generally hovered below the $1 billion mark for the past six months, sparking Vista Equity Partners into action today.

Vista said that it expects the transaction to close in the middle of 2024, though it is of course subject to the usual conditions, including shareholder approval.



Tesla slashes Model Y inventory prices by as much as $7,000 | TechCrunch


Tesla is dropping prices of unsold Model Y SUVs in the U.S. by thousands of dollars in an attempt to clear out an unprecedented backlog of inventory.

Many long-range and performance Model Ys are now selling for $5,000 less than their original price, while rear-wheel drive versions are seeing even bigger cuts of more than $7,000.

The discounts come as Tesla once again made far more vehicles than it sold in the last quarter. The company built 433,371 vehicles in the first quarter but only shipped 386,810, likely adding more than 40,000 EVs to its inventory glut. (Some of those vehicles were likely in transit, though Tesla didn’t say how many.) The company has built more cars than it shipped in seven of the last eight quarters, Bloomberg News noted Friday.

In January, Tesla warned sales growth could be “notably lower” in 2024 compared to previous years — a trend that has bothered every player in the market from big automakers like Ford to struggling upstarts like Lucid.

Tesla went through a typical end-of-quarter push to deliver as many cars as it could over the last few weeks, with lead designer Franz von Holzhausen once again pitching in to get them out the door in the final days. But Tesla also tried to boost sales in other ways. It announced a $1,000 price hike was coming to the Model Y, its most popular vehicle, on April 1. Tesla CEO Elon Musk also started mandating demos of the company’s advanced driver assistance system to all potential buyers. That software package costs $12,000 and can be a huge boost to the profit Tesla makes on a vehicle.

Musk has more or less admitted that Tesla has had to work harder to drum up demand for its vehicles lately. He has largely blamed the struggle on high interest rates, all while his company dramatically cut prices on the Model Y and Model 3 throughout 2023.





OpenAI announces Tokyo office and GPT-4 model optimized for the Japanese language | TechCrunch


OpenAI is expanding to Japan, announcing today a new Tokyo hub and plans for a GPT-4 model optimized specifically for the Japanese language.

The ChatGPT-maker opened its first international office in London last year, followed by its inaugural European Union (EU) office in Dublin a few months later. Tokyo will represent OpenAI’s first office in Asia and fourth globally (including its San Francisco HQ), with CEO Sam Altman highlighting Japan’s “rich history of people and technology coming together to do more” among its reasons for setting up a formal presence in the region.

OpenAI’s global expansion efforts so far have been strategic, insofar as the U.K. is a major hub for AI talent while the EU is currently driving the AI regulatory agenda. Japan, meanwhile, is also positioned prominently in the AI revolution, most recently as the G7 chair and President of the G7’s Hiroshima AI Process, an initiative centered around AI governance and pushing for safe and trustworthy AI.

Its choice on who will head up its new Japanese business is also notable. OpenAI Japan will be led by Tadao Nagasaki, who joins the company from Amazon Web Services (AWS), where he headed up Amazon’s cloud computing subsidiary in the region for the past 12 years — so it’s clear that OpenAI is really targeting the enterprise segment with this latest expansion.

Enterprising

As President of OpenAI Japan, Nagasaki will be tasked with building a local team on the ground and doubling down on OpenAI’s recent growth in Japan, which has seen it secure customers including Daikin, Rakuten and Toyota. Those companies use ChatGPT’s enterprise-focused incarnation, which adds privacy, data analysis and customization options on top of the standard consumer-grade ChatGPT.

OpenAI says ChatGPT is also already being used by local governments to “improve the efficiency of public services in Japan.”

Image: GPT-4 customized for Japanese. Image Credits: OpenAI

While ChatGPT has long been conversant in multiple languages, including Japanese, optimizing the latest version of the underlying GPT large language model (LLM) for Japanese should give it a better grasp of the language’s nuances and cultural context, making it more effective in business settings such as customer service and content creation. OpenAI also says the custom model delivers improved performance, meaning it should run faster and be more cost-effective than its predecessor.

For now, OpenAI is giving early access to the GPT-4 custom model to some local businesses, with access gradually opened up via the OpenAI API “in the coming months.”
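
If access does open up through the standard API, calling the model would presumably look like any other chat completion request. Below is a minimal sketch assuming a placeholder model identifier, since OpenAI has not published one:

```python
# Minimal sketch of calling a Japanese-optimized GPT-4 model through the OpenAI API.
# "gpt-4-japanese" is a placeholder identifier; swap in whatever name the API
# eventually exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-japanese",  # placeholder, not a real model name
    messages=[
        {"role": "system", "content": "あなたは丁寧なカスタマーサポート担当者です。"},
        {"role": "user", "content": "注文した商品がまだ届いていないのですが、確認してもらえますか？"},
    ],
)
print(response.choices[0].message.content)
```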



OctoAI wants to make private AI model deployments easier with OctoStack | TechCrunch


OctoAI (formerly known as OctoML) announced the launch of OctoStack, its new end-to-end solution for deploying generative AI models in a company’s private cloud, be that on-premises or in a virtual private cloud from one of the major vendors, including AWS, Google Cloud and Microsoft Azure, as well as CoreWeave, Lambda Labs, Snowflake and others.

In its early days, OctoAI focused almost exclusively on optimizing models to run more effectively. Based on the Apache TVM machine learning compiler framework, the company then launched its TVM-as-a-Service platform and, over time, expanded that into a fully fledged model-serving offering that combined its optimization chops with a DevOps platform. With the rise of generative AI, the team then launched the fully managed OctoAI platform to help its users serve and fine-tune existing models. OctoStack, at its core, is that OctoAI platform, but for private deployments.

Image Credits: OctoAI

OctoAI CEO and co-founder Luis Ceze told me the company has over 25,000 developers on the platform and hundreds of paying customers who use it in production. A lot of these companies, Ceze said, are GenAI-native companies. The market of traditional enterprises wanting to adopt generative AI is significantly larger, though, so it’s maybe no surprise that OctoAI is now going after them as well with OctoStack.

“One thing that became clear is that, as the enterprise market is going from experimentation last year to deployments, one, all of them are looking around because they’re nervous about sending data over an API,” Ceze said. “Two: a lot of them have also committed their own compute, so why am I going to buy an API when I already have my own compute? And three, no matter what certifications you get and how big of a name you have, they feel like their AI is precious like their data and they don’t want to send it over. So there’s this really clear need in the enterprise to have the deployment under your control.”

Ceze noted that the team had been building out the architecture to offer both its SaaS and hosted platform for a while now. And while the SaaS platform is optimized for Nvidia hardware, OctoStack can support a far wider range of hardware, including AMD GPUs and AWS’s Inferentia accelerator, which in turn makes the optimization challenge quite a bit harder (while also playing to OctoAI’s strengths).

Deploying OctoStack should be straightforward for most enterprises, as OctoAI delivers the platform with ready-to-go containers and their associated Helm charts for deployments. For developers, the API remains the same, no matter whether they are targeting the SaaS product or OctoAI in their private cloud.
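
In practice, that promise amounts to little more than swapping a base URL. The sketch below assumes an OpenAI-style chat endpoint purely for illustration; the actual OctoAI request format, paths and model names may differ, so treat every field here as a placeholder.

```python
# Illustration of "same API, different deployment": point the client at either the
# hosted SaaS endpoint or a private OctoStack install via an environment variable.
# The endpoint path, payload shape and model name are assumptions, not OctoAI's
# documented API.
import os
import requests

base_url = os.environ.get("OCTOAI_BASE_URL", "https://octostack.internal.example.com")
api_key = os.environ.get("OCTOAI_API_KEY", "")

resp = requests.post(
    f"{base_url}/v1/chat/completions",  # hypothetical OpenAI-style path
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "llama-2-70b-chat",  # placeholder model name
        "messages": [{"role": "user", "content": "Summarize last quarter's incident reports."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```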

The canonical enterprise use case remains using text summarization and RAG to allow users to chat with their internal documents, but some companies are also fine-tuning these models on their internal code bases to run their own code generation models (similar to what GitHub now offers to Copilot Enterprise users).
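
For readers unfamiliar with the pattern, the retrieval half of RAG boils down to ranking internal documents against a question and pasting the best matches into the prompt. Here is a toy sketch using TF-IDF as a stand-in for the learned embeddings and vector database a real deployment would use:

```python
# Toy sketch of the retrieval step in RAG: rank internal documents against a user
# question, then paste the best matches into the prompt sent to a language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Expense reports must be filed within 30 days of travel.",
    "The on-call rotation changes every Monday at 09:00 UTC.",
    "Production database credentials are rotated quarterly.",
]
question = "When do I need to submit my expense report?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

# Pick the top-2 most similar documents as context for the model.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

prompt = "Answer using only this context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the privately deployed model
```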

For many enterprises, being able to do that in a secure environment that is strictly under their control is what now enables them to put these technologies into production for their employees and customers.

“For our performance- and security-sensitive use case, it is imperative that the models which process calls data run in an environment that offers flexibility, scale and security,” said Dali Kaafar, founder and CEO at Apate AI. “OctoStack lets us easily and efficiently run the customized models we need, within environments that we choose, and deliver the scale our customers require.”



OpenAI expands its custom model training program | TechCrunch


OpenAI is expanding a program, Custom Model, to help enterprise customers develop tailored generative AI models using its technology for specific use cases, domains and applications.

Custom Model launched last year at OpenAI’s inaugural developer conference, DevDay, offering companies an opportunity to work with a group of dedicated OpenAI researchers to train and optimize models for specific domains. “Dozens” of customers have enrolled in Custom Model since. But OpenAI says that, in working with this initial crop of users, it’s come to realize the need to grow the program to further “maximize performance.”

Hence assisted fine-tuning and custom-trained models.

Assisted fine-tuning, a new component of the Custom Model program, leverages techniques beyond fine-tuning — such as “additional hyperparameters and various parameter efficient fine-tuning methods at a larger scale,” in OpenAI’s words — to enable organizations to set up data training pipelines, evaluation systems and other supporting infrastructure toward bolstering model performance on particular tasks.
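
OpenAI hasn’t detailed which parameter-efficient methods it uses, but the general idea behind one widely used technique, LoRA, is to freeze the pretrained weights and train only a small low-rank update. The numpy sketch below illustrates that idea generically; it is not OpenAI’s tooling.

```python
# Minimal numpy illustration of the idea behind LoRA, one parameter-efficient
# fine-tuning method: keep the original weight matrix frozen and learn only a
# small low-rank update A @ B.
import numpy as np

d, r = 1024, 8                          # model dimension and LoRA rank
rng = np.random.default_rng(0)

W_frozen = rng.standard_normal((d, d))       # pretrained weights, never updated
A = rng.standard_normal((d, r)) * 0.01       # trainable low-rank factor
B = np.zeros((r, d))                         # zero-initialized so the update starts as a no-op

def forward(x):
    # Effective weight is W_frozen + A @ B, without materializing a second d x d matrix.
    return x @ W_frozen + (x @ A) @ B

x = rng.standard_normal((2, d))
print(forward(x).shape)                      # (2, 1024)

full, lora = d * d, d * r + r * d
print(f"trainable params: {lora:,} vs {full:,} ({lora / full:.2%} of full fine-tuning)")
```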

As for custom-trained models, they’re custom models built with OpenAI — using OpenAI’s base models and tools (e.g. GPT-4) — for customers that “need to more deeply fine-tune their models” or “imbue new, domain-specific knowledge,” OpenAI says.

OpenAI gives the example of SK Telecom, the Korean telecommunications giant, which worked with OpenAI to fine-tune GPT-4 to improve its performance in “telecom-related conversations” in Korean. Another customer, Harvey — which is building AI-powered legal tools with support from the OpenAI Startup Fund, OpenAI’s AI-focused venture arm — teamed up with OpenAI to create a custom model for case law that incorporated hundreds of millions of words of legal text and feedback from licensed expert attorneys.

“We believe that in the future, the vast majority of organizations will develop customized models that are personalized to their industry, business, or use case,” OpenAI writes in a blog post. “With a variety of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impact from their AI implementations.”

Image Credits: OpenAI

OpenAI is flying high, reportedly nearing an astounding $2 billion in annualized revenue. But there’s surely internal pressure to maintain pace, particularly as the company plots a $100 billion data center co-developed with Microsoft (if reports are to be believed). The cost of training and serving flagship generative AI models isn’t coming down anytime soon after all, and consulting work like custom model training might just be the thing to keep revenue growing while OpenAI plots its next moves.

Fine-tuned and custom models could also lessen the strain on OpenAI’s model serving infrastructure. Tailored models are in many cases smaller and more performant than their general-purpose counterparts, and — as the demand for generative AI reaches a fever pitch — no doubt present an attractive solution for a historically compute-capacity-challenged OpenAI.

Alongside the expanded Custom Model program and custom model building, OpenAI today unveiled new model fine-tuning features for developers working with GPT-3.5, including a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases) and enhancements to tooling. Mum’s the word on fine-tuning for GPT-4, however, which launched in early access during DevDay.
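
For developers, kicking off such a job looks roughly like the sketch below. The upload-then-create flow matches OpenAI’s fine-tuning API; the exact shape of the Weights & Biases integration payload is an assumption worth checking against the current docs.

```python
# Sketch of starting a GPT-3.5 fine-tuning job with the Weights & Biases
# integration enabled. Treat the `integrations` payload as an assumption.
from openai import OpenAI

client = OpenAI()

training_file = client.files.create(
    file=open("support_chats.jsonl", "rb"),   # chat-formatted training examples
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
    integrations=[                            # assumed shape of the W&B integration
        {"type": "wandb", "wandb": {"project": "gpt35-finetune-demo"}}
    ],
)
print(job.id, job.status)
```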



Databricks spent $10M on new DBRX generative AI model | TechCrunch


If you wanted to raise the profile of your major tech company and had $10 million to spend, how would you spend it? On a Super Bowl ad? An F1 sponsorship?

You could spend it training a generative AI model. While not marketing in the traditional sense, generative models are attention grabbers — and increasingly funnels to vendors’ bread-and-butter products and services.

See Databricks’ DBRX, a new generative AI model announced today akin to OpenAI’s GPT series and Google’s Gemini. Available on GitHub and the AI dev platform Hugging Face for research as well as for commercial use, base (DBRX Base) and fine-tuned (DBRX Instruct) versions of DBRX can be run and tuned on public, custom or otherwise proprietary data.
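
For those with the hardware (more on that below), loading the instruct variant from Hugging Face would presumably follow the usual transformers pattern. The repo name and loading flags here are assumptions based on how large open-weight models are typically published:

```python
# Rough sketch of pulling DBRX Instruct from Hugging Face with transformers.
# Assumes the "databricks/dbrx-instruct" repo name; in its standard configuration
# the model needs roughly 320GB of GPU memory, as noted later in this article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"   # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # shards the weights across available GPUs
    trust_remote_code=True,
)

inputs = tokenizer("Databricks is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```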

“DBRX was trained to be useful and provide information on a wide variety of topics,” Naveen Rao, VP of generative AI at Databricks, told TechCrunch in an interview. “DBRX has been optimized and tuned for English language usage, but is capable of conversing and translating into a wide variety of languages, such as French, Spanish and German.”

Databricks describes DBRX as “open source” in a similar vein as “open source” models like Meta’s Llama 2 and AI startup Mistral’s models. (It’s the subject of robust debate as to whether these models truly meet the definition of open source.)

Databricks says that it spent roughly $10 million and two months training DBRX, which it claims (quoting from a press release) “outperform[s] all existing open source models on standard benchmarks.”

But — and here’s the marketing rub — it’s exceptionally hard to use DBRX unless you’re a Databricks customer.

That’s because, in order to run DBRX in the standard configuration, you need a server or PC with at least four Nvidia H100 GPUs (or any other configuration of GPUs that add up to around 320GB of memory). A single H100 costs thousands of dollars — quite possibly more. That might be chump change to the average enterprise, but for many developers and solopreneurs, it’s well beyond reach.

It’s possible to run the model on a third-party cloud, but the hardware requirements are still pretty steep — for example, there’s only one instance type on the Google Cloud that incorporates H100 chips. Other clouds may cost less, but generally speaking running huge models like this is not cheap today.

And there’s fine print to boot. Databricks says that companies with more than 700 million active users will face “certain restrictions” comparable to Meta’s for Llama 2, and that all users will have to agree to terms ensuring that they use DBRX “responsibly.” (Databricks hadn’t volunteered those terms’ specifics as of publication time.)

Databricks presents its Mosaic AI Foundation Model product as the managed solution to these roadblocks, which in addition to running DBRX and other models provides a training stack for fine-tuning DBRX on custom data. Customers can privately host DBRX using Databricks’ Model Serving offering, Rao suggested, or they can work with Databricks to deploy DBRX on the hardware of their choosing.

Rao added:

“We’re focused on making the Databricks platform the best choice for customized model building, so ultimately the benefit to Databricks is more users on our platform. DBRX is a demonstration of our best-in-class pre-training and tuning platform, which customers can use to build their own models from scratch. It’s an easy way for customers to get started with the Databricks Mosaic AI generative AI tools. And DBRX is highly capable out-of-the-box and can be tuned for excellent performance on specific tasks at better economics than large, closed models.”

Databricks claims DBRX runs up to 2x faster than Llama 2, in part thanks to its mixture of experts (MoE) architecture. MoE — which DBRX shares with Mistral’s newer models and Google’s recently announced Gemini 1.5 Pro — basically breaks down data processing tasks into multiple subtasks and then delegates these subtasks to smaller, specialized “expert” models.

Most MoE models have eight experts. DBRX has 16, which Databricks says improves quality.
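
Stripped to its essentials, mixture-of-experts routing looks something like the toy sketch below: a small “router” scores the experts for each token and only the top-scoring few actually run. This is a generic illustration of the idea, not DBRX’s implementation.

```python
# Toy numpy illustration of mixture-of-experts routing with top-k expert selection.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2

# Each "expert" is just a small weight matrix here; real experts are feed-forward blocks.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(token):
    scores = softmax(token @ router)              # how relevant each expert is to this token
    chosen = np.argsort(scores)[-top_k:]          # run only the top-k experts
    weights = scores[chosen] / scores[chosen].sum()
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)   # (64,): same shape out, but only 2 of 16 experts did any work
```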

Quality is relative, however.

While Databricks claims that DBRX outperforms Llama 2 and Mistral’s models on certain language understanding, programming, math and logic benchmarks, DBRX falls short of arguably the leading generative AI model, OpenAI’s GPT-4, in most areas outside of niche use cases like database programming language generation.

Now, as some on social media have pointed out, DBRX and GPT-4, which cost significantly more to train, are very different — perhaps too different to warrant a direct comparison. It’s important that these large, enterprise-funded models get compared to the best of the field, but what distinguishes them should also be pointed out, like the fact that DBRX is “open source” and targeted at a distinctly enterprise audience.

At the same time, it can’t be ignored that DBRX is somewhat close to flagship models like GPT-4 in that it’s cost-prohibitive for the average person to run, its training data isn’t open and it isn’t open source in the strictest definition.

Rao admits that DBRX has other limitations as well, namely that it — like all other generative AI models — can fall victim to “hallucinating” answers to queries despite Databricks’ work in safety testing and red teaming. Because the model was simply trained to associate words or phrases with certain concepts, if those associations aren’t totally accurate, its responses won’t always be accurate.

Also, DBRX is not multimodal, unlike some more recent flagship generative AI models, including Gemini. (It can only process and generate text, not images.) And we don’t know exactly what sources of data were used to train it; Rao would only reveal that no Databricks customer data was used in training DBRX.

“We trained DBRX on a large set of data from a diverse range of sources,” he added. “We used open data sets that the community knows, loves and uses every day.”

I asked Rao if any of the DBRX training data sets were copyrighted or licensed, or show obvious signs of biases (e.g. racial biases), but he didn’t answer directly, saying only, “We’ve been careful about the data used, and conducted red teaming exercises to improve the model’s weaknesses.” Generative AI models have a tendency to regurgitate training data, a major concern for commercial users of models trained on unlicensed, copyrighted or very clearly biased data. In the worst-case scenario, a user could end up on the ethical and legal hooks for unwittingly incorporating IP-infringing or biased work from a model into their projects.

Some companies training and releasing generative AI models offer policies covering the legal fees arising from possible infringement. Databricks doesn’t at present — Rao says that the company’s “exploring scenarios” under which it might.

Given this and the other aspects in which DBRX misses the mark, the model seems like a tough sell to anyone but current or would-be Databricks customers. Databricks’ rivals in generative AI, including OpenAI, offer equally if not more compelling technologies at very competitive pricing. And plenty of generative AI models come closer to the commonly understood definition of open source than DBRX.

Rao promises that Databricks will continue to refine DBRX and release new versions as the company’s Mosaic Labs R&D team — the team behind DBRX — investigates new generative AI avenues.

“DBRX is pushing the open source model space forward and challenging future models to be built even more efficiently,” he said. “We’ll be releasing variants as we apply techniques to improve output quality in terms of reliability, safety and bias … We see the open model as a platform on which our customers can build custom capabilities with our tools.”

Judging by where DBRX now stands relative to its peers, it’s an exceptionally long road ahead.

This story was corrected to note that the model took two months to train, and removed an incorrect reference to Llama 2 in the fourteenth paragraph. We regret the errors.



AI21 Labs' new AI model can handle more context than most | TechCrunch


Increasingly, the AI industry is moving toward generative AI models with longer contexts. But models with large context windows tend to be compute-intensive. Or Dagan, product lead at AI startup AI21 Labs, asserts that this doesn’t have to be the case — and his company is releasing a generative model to prove it.

Contexts, or context windows, refer to input data (e.g. text) that a model considers before generating output (more text). Models with small context windows tend to forget the content of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.

AI21 Labs’ Jamba, a new text-generating and -analyzing model, can perform many of the same tasks that models like OpenAI’s ChatGPT and Google’s Gemini can. Trained on a mix of public and proprietary data, Jamba can write text in English, French, Spanish and Portuguese.

Jamba can handle up to 140,000 tokens while running on a single GPU with at least 80GB of memory (like a high-end Nvidia A100). That translates to around 105,000 words, or 210 pages — a decent-sized novel.

Meta’s Llama 2, by comparison, has a 32,000-token context window — on the smaller side by today’s standards — but only requires a GPU with ~12GB of memory in order to run. (Context windows are typically measured in tokens, which are bits of raw text and other data.)
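
Token-to-word conversions like the ones above are easy to sanity-check. The sketch below uses OpenAI’s tiktoken tokenizer; tokenizers differ between model families, so treat the numbers as ballpark figures rather than Jamba’s exact accounting.

```python
# Quick sanity check of how token counts relate to word counts.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = (
    "Models with small context windows tend to forget the content of even very "
    "recent conversations, while models with larger contexts avoid this pitfall."
)
n_words = len(text.split())
n_tokens = len(enc.encode(text))
print(n_words, "words ->", n_tokens, "tokens")

# Scale the observed ratio up to Jamba's single-GPU limit of 140,000 tokens.
print(f"~{int(140_000 * n_words / n_tokens):,} words fit in a 140,000-token window (rough estimate)")
```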

On its face, Jamba is unremarkable. Loads of freely available, downloadable generative AI models exist, from Databricks’ recently released DBRX to the aforementioned Llama 2.

But what makes Jamba unique is what’s under the hood. It uses a combination of two model architectures: transformers and state space models (SSMs).

Transformers are the architecture of choice for complex reasoning tasks, powering models like GPT-4 and Google’s Gemini, for example. They have several unique characteristics, but by far transformers’ defining feature is their “attention mechanism.” For every piece of input data (e.g. a sentence), transformers weigh the relevance of every other input (other sentences) and draw from them to generate the output (a new sentence).
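
Here is a minimal numpy rendering of that mechanism, with the learned projections and multiple heads of a real transformer stripped away:

```python
# Minimal scaled dot-product attention: each position scores every other position,
# turns the scores into weights, and mixes the values accordingly.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # relevance of every input to every query
    weights = softmax(scores)                 # each row sums to 1
    return weights @ V                        # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d = 5, 8                             # 5 tokens, 8-dimensional representations
x = rng.standard_normal((seq_len, d))
print(attention(x, x, x).shape)               # (5, 8): one updated vector per token
```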

SSMs, on the other hand, combine several qualities of older types of AI models, such as recurrent neural networks and convolutional neural networks, to create a more computationally efficient architecture capable of handling long sequences of data.
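
The core trick is a fixed-size state that is updated once per input step, so cost grows linearly with sequence length rather than quadratically as in attention. The bare-bones linear recurrence below is only the skeleton; Mamba-style layers make these parameters input-dependent and compute the recurrence far more efficiently.

```python
# Simplest possible discrete linear state space recurrence: one cheap state update
# per step, regardless of how long the history is.
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in, seq_len = 16, 8, 10_000

A = np.eye(d_state) * 0.95                      # state transition (kept stable)
B = rng.standard_normal((d_state, d_in)) * 0.1  # how inputs enter the state
C = rng.standard_normal((d_in, d_state)) * 0.1  # how the state is read out

def ssm(inputs):
    h = np.zeros(d_state)
    outputs = []
    for x in inputs:            # linear in sequence length
        h = A @ h + B @ x
        outputs.append(C @ h)
    return np.stack(outputs)

xs = rng.standard_normal((seq_len, d_in))
print(ssm(xs).shape)            # (10000, 8)
```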

Now, SSMs have their limitations. But some of the early incarnations, including an open source model called Mamba from Princeton and Carnegie Mellon researchers, can handle larger inputs than their transformer-based equivalents while outperforming them on language generation tasks.

Jamba in fact uses Mamba as part of the core model — and Dagan claims it delivers three times the throughput on long contexts compared to transformer-based models of comparable sizes.

“While there are a few initial academic examples of SSM models, this is the first commercial-grade, production-scale model,” Dagan said in an interview with TechCrunch. “This architecture, in addition to being innovative and interesting for further research by the community, opens up great efficiency and throughput possibilities.”

Now, while Jamba has been released under the Apache 2.0 license, an open source license with relatively few usage restrictions, Dagan stresses that it’s a research release not intended to be used commercially. The model doesn’t have safeguards to prevent it from generating toxic text or mitigations to address potential bias; a fine-tuned, ostensibly “safer” version will be made available in the coming weeks.

But Dagan asserts that Jamba demonstrates the promise of the SSM architecture even at this early stage.

“The added value of this model, both because of its size and its innovative architecture, is that it can be easily fitted onto a single GPU,” he said. “We believe performance will further improve as Mamba gets additional tweaks.”



X's Grok chatbot will soon get an upgraded model, Grok-1.5 | TechCrunch


Elon Musk’s AI startup, X.ai, has revealed its latest generative AI model, Grok-1.5. Set to power social network X’s Grok chatbot in the not-so-distant future (“in the coming days,” per a blog post), Grok-1.5 appears to be a measurable upgrade over its predecessor, Grok-1 — at least judging by the published benchmark results and specs.

Grok-1.5 benefits from “improved reasoning,” according to X.ai, particularly where it concerns coding and math-related tasks. The model more than doubled Grok-1’s score on a popular mathematics benchmark, MATH, and scored over 10 percentage points higher on the HumanEval test of programming language generation and problem-solving abilities.

It’s difficult to predict how those results will translate in actual usage. As we recently wrote, commonly-used AI benchmarks, which measure things as esoteric as performance on graduate-level chemistry exam questions, do a poor job of capturing how the average person interacts with models today.

One improvement that should lead to observable gains is the amount of context Grok-1.5 can understand compared to Grok-1.

Grok-1.5 can process contexts of up to 128,000 tokens. Here, “tokens” refers to bits of raw text (e.g., the word “fantastic” split into “fan,” “tas” and “tic”). Context, or context window, refers to input data (in this case, text) that a model considers before generating output (more text). Models with small context windows tend to forget the contents of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.

“[Grok-1.5 can] utilize information from substantially longer documents,” X.ai writes in the blog post. “Furthermore, the model can handle longer and more complex prompts while still maintaining its instruction-following capability as its context window expands.”

What’s historically set X.ai’s Grok models apart from other generative AI models is that they respond to questions about topics that are typically off-limits to other models, like conspiracies and more controversial political ideas. The models also answer questions with “a rebellious streak,” as Musk has described it, and outright rude language if requested to do so.

It’s unclear what changes, if any, Grok-1.5 brings in these areas. X.ai doesn’t allude to this in the blog post.

Grok-1.5 will soon be available to early testers on X, accompanied by “several new features.” Musk has previously hinted at summarizing threads and replies, and suggesting content for posts; we’ll see if those arrive soon enough.

The announcement comes after X.ai open sourced Grok-1, albeit without the code necessary to fine-tune or further train it. More recently, Musk said that more users on X — specifically those paying for X’s $8-per-month Premium plan — would gain access to the Grok chatbot, which was previously only available to X Premium+ customers (who pay $16 per month).



Building a viable pricing model for generative AI features could be challenging | TechCrunch


In October, Box unveiled a new pricing approach for the company’s generative AI features. Instead of a flat rate, the company designed a unique consumption-based model.

Each user gets 20 credits per month, with each AI task costing a single credit, so the allowance covers up to 20 tasks. After that, people can dip into a company-wide pool of 2,000 additional credits. If the customer surpasses that as well, it’s time to have a conversation with a salesperson about buying more.
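
As a back-of-the-envelope model, the scheme described above can be written down in a few lines (the numbers are Box’s published figures; what happens past the shared pool is a sales conversation, so that case is only flagged here):

```python
# Rough model of Box's credit scheme: one credit per AI task, 20 credits per user
# per month, overflow drawn from a shared pool of 2,000 company-wide credits.
def monthly_credit_usage(tasks_per_user, per_user_allowance=20, company_pool=2000):
    overflow = sum(max(tasks - per_user_allowance, 0) for tasks in tasks_per_user)
    pool_remaining = company_pool - overflow
    return {
        "total_tasks": sum(tasks_per_user),
        "overflow_into_pool": overflow,
        "pool_remaining": pool_remaining,
        "needs_sales_conversation": pool_remaining < 0,
    }

# Example: 50 light users plus 10 heavy users who each run 150 AI tasks this month.
usage = [5] * 50 + [150] * 10
print(monthly_credit_usage(usage))
# {'total_tasks': 1750, 'overflow_into_pool': 1300, 'pool_remaining': 700,
#  'needs_sales_conversation': False}
```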

Box CEO Aaron Levie explained that this approach provides a way to charge based on usage with the understanding that some users would take advantage of the AI features more than others, while also accounting for the cost of using the OpenAI API, which the company is using for its underlying large language model.

Meanwhile, Microsoft has chosen a more traditional pricing model, announcing in November that it would charge $30 per user per month to use its Copilot features, over and above the cost of a normal monthly Office 365 subscription, which varies by customer.

While it became clear throughout last year that enterprise software companies would be building generative AI features, at a panel on generative AI’s impact on SaaS companies at Web Summit in November, Christine Spang, co-founder and CTO at Nylas, a communications API startup, and Manny Medina, CEO at sales enablement platform Outreach, spoke about the challenges that SaaS companies face as they implement these features.

Spang says, for starters, that in spite of the hype, generative AI is clearly a big leap forward, and software companies need to look for ways to incorporate it into their products. “I’m not going to say it’s like 10 out of 10 where the hype meets the [current] reality, but I do think there is real value there and what’s really going to make the difference is how people take the technology and connect it to other systems, other apps and sort of drive real value in different use cases with it,” she said.

