From Digital Age to Nano Age. WorldWide.

Tag: Music

Robotic Automations

Rock band's hidden hacking-themed website gets hacked | TechCrunch


On Friday, Pal Kovacs was listening to the long-awaited new album from rock and metal giants Bring Me The Horizon when he noticed a strange sound at the end of the record’s last track.  Being a fan of solving riddles and breaking encrypted codes, Kovacs wondered: does this sound contain a hidden message?  His hunch […]
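Hidden messages of this kind are often embedded in a track's spectrogram, where text or an image becomes visible once the audio is plotted as frequency over time. As a hedged illustration (the article doesn't specify the band's actual encoding), here is a minimal NumPy sketch of the spectrogram analysis a fan like Kovacs might run, using a synthetic 1 kHz tone as a stand-in for a hidden signal:

```python
import numpy as np

def spectrogram(signal, win=1024, hop=512):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(win)
    frames = np.array([signal[i:i + win] * window
                       for i in range(0, len(signal) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, time_frames)

# Synthetic example: one second of audio containing a 1 kHz tone.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)

S = spectrogram(tone)
peak_bin = int(S.mean(axis=1).argmax())  # strongest frequency bin
peak_hz = peak_bin * sr / 1024           # bin index converted back to Hz
```

Plotting `S` (e.g. with matplotlib's `imshow`) is how spectrogram-based messages are usually spotted; here the peak lands at the 1 kHz bin, confirming the analysis recovers the embedded frequency.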

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


EXCLUSIVE: Spotify experiments with an AI DJ that speaks Spanish


Spotify’s addition of its AI DJ feature, which introduces personalized song selections to users, was the company’s first step into an AI future. Now, Spotify is developing an alternative version of that DJ that will speak Spanish. References to the new AI DJ were spotted in the app’s code by tech veteran and reverse engineer […]




Sony Music warns tech companies over 'unauthorized' use of its content to train AI | TechCrunch


Sony Music Group has sent letters to more than 700 tech companies and music streaming services to warn them not to use its music to train AI without explicit permission. The letter, which was obtained by TechCrunch, says Sony Music has “reason to believe” that the recipients of the letter “may already have made […]




Amazon Music follows Spotify with an AI playlist generator of its own, Maestro | TechCrunch


Spotify isn’t the only company to dabble with using AI to generate playlists — on Tuesday, Amazon said it would do the same. Amazon Music is now testing Maestro, an AI playlist generator, allowing U.S. customers on both iOS and Android to create playlists using spoken or written prompts — which can even contain emojis.

Amazon suggests that in addition to emojis, users can write prompts that include activities, sounds or emotions. They can also choose from prompt suggestions at the bottom of the screen if they don’t know what to write. Seconds later, an AI-generated playlist will appear with songs that will — in theory — match your input.

The product is launching in beta, so Amazon warns that the technology behind Maestro “won’t always get it right the first time.” Like Spotify, Amazon has also added some guardrails to the experience to proactively block offensive language and other inappropriate prompts, it says. (We’re guessing people will try to break through those barriers in time!)

Image Credits: Amazon

Maestro is not yet widely available. While Spotify’s AI generator is starting its tests in the U.K. and Australia, Amazon’s product is launching to a “subset” of free Amazon Music users, as well as Prime customers and Unlimited Amazon Music subscribers, on iOS and Android in the U.S. for the time being.

Subscribers will gain access to more functionality, however. For instance, they’ll be able to listen to playlists instantly and save them for later, but Prime members and ad-supported users will only be able to listen to 30-second previews of the songs before saving them. This could potentially push more users to upgrade to the paid subscription if they like the AI functionality. The move also follows the general trend of making premium AI experiences a paid offering.

Image Credits: Amazon

To access Maestro, users will need the latest version of the Amazon Music mobile app. They will have to tap on the option for Maestro on their home screen. They may also see the option to use Maestro when they tap on the plus sign to create a new playlist. From there, users can either talk or write out their playlist prompt idea, then tap “Let’s go!” to start streaming it. The playlist can also be saved and shared with friends.

Amazon suggests prompts like “😭 and eating 🍝”; “Make my 👶 a genius”; “Myspace era hip-hop”; “🏜️🌵🤠;” “Music my grandparents made out to”; “🎤🚿🧼”; and “I tracked my friends and they’re all hanging out without me” to give you an idea of how silly the prompts can be for this new experience.

The company didn’t say when the beta would roll out more broadly, only that it would expand to more customers over time.



Exclusive: Indaband's new app lets you create music with people around the world


A new social media app called Indaband lets musicians and vocalists collaborate with others and make music with people all over the world. The app is designed to make people who usually play an instrument on their own feel like they’re part of a worldwide band (get it, Indaband?). Record a video of yourself playing an instrument and others can stitch in videos of themselves playing their own instruments on top of your original recording.

All you need to get started on Indaband is a pair of headphones and a smartphone to record yourself. You can choose to upload prerecorded files as new tracks or open the app’s recording booth to record your tracks on top of someone else’s. You can record and mix unlimited video tracks in different sessions using the app’s multitrack video studio and share them with your community. Indaband notifies you when someone collaborates with one of your tracks, so you can see how they added their own take on your content.

The app is the brainchild of CEO Daniel Murta, CTO Andrews Medina, Head of Engineering Helielson Santos and Design Leader Emerson Farias. The co-founders came up with the idea for the app when they were working at a legal technology company called Jusbrasil, which Murta co-founded.

Image Credits: Indaband

They all used to get together to play music during happy hours after work, and once the pandemic hit, they came up with the idea for Indaband so they could still play music together while in quarantine. The group then spent their weekends working on Indaband and eventually ended up leaving Jusbrasil to focus on Indaband full-time.

“Music creation is very hard and involves complex software. So, the whole idea was to redesign this process from scratch and make it simple and out of your smartphone,” Murta told TechCrunch. “The idea was that we would unlock musical expression to a different level to make it simple to collaborate and co-create music.”

Indaband helps users discover songs and jam sessions with daily curated playlists that dive into different genres, like rock, jazz, hip-hop and EDM. Users can like and comment on videos and repost them to their followers.

Indaband plans on launching a new feature called “Circles” that Murta compares to clubs on Strava. Circles will allow users to build their own communities on the app and possibly even hold live events. Indaband may also develop a Patreon-like feature within Circles that would allow established creators to offer paid content. For instance, an established musician could offer virtual lessons on an instrument that they have mastered.

Image Credits: Indaband

While Indaband’s early adopters are skilled musicians who are comfortable sharing their music and recording themselves, Indaband eventually plans to target musicians and singers who are just starting out.

“We want to be known as a place where the musical community flourishes,” Murta said. “There is no place for musical communities right now. So the idea is to be known for that, and our strategy is to make it easy to create, and allow everyone to join the creation process.”

Indaband raised $7 million in seed funding in late 2021. The funding round included several angel investors, including Instagram co-founder Mike Krieger and former Megadeth guitar player Kiko Loureiro. The round also included funding from several Latin American VC firms, including Monashees, Astella and Upload Ventures.

The app is free and is available on iOS and Android.



Taylor Swift's music is back on TikTok, despite platform's ongoing UMG dispute | TechCrunch


After 10 weeks of being absent from the platform, Taylor Swift’s music has returned to TikTok — or at least her more recent songs and “Taylor’s Version” cuts, since she owns those masters.

Taylor Swift’s music, and music from all artists signed to Universal Music Group, was pulled from TikTok when the two parties were unable to come to a renewed licensing agreement. UMG published a scathing press release accusing TikTok of trying to “bully” the label into accepting a deal worth less than its previous one. UMG framed its refusal to come to a deal with TikTok as a means of standing up for emerging artists.

“How did [TikTok] try to intimidate us? By selectively removing the music of certain of our developing artists, while keeping on the platform our audience-driving global stars,” UMG wrote. “TikTok’s tactics are obvious: use its platform power to hurt vulnerable artists and try to intimidate us into conceding to a bad deal that undervalues music and shortchanges artists and songwriters as well as their fans.”

TikTok did not respond to a request for comment.

UMG also represents superstars like Billie Eilish, BTS, Ariana Grande and Olivia Rodrigo, but Swift is in a unique position. After contractual disputes of her own, Swift has been re-recording her old albums to reclaim ownership of the songs. Her “Taylor’s Version” recordings are back on TikTok, but songs from records like “Reputation,” which doesn’t yet have a “Taylor’s Version,” are still absent from the platform.

The timing of Swift’s return to TikTok isn’t a coincidence. Next week, Swift will release her new album, “The Tortured Poets Department.” Even artists as huge as Swift are not immune to the necessity for social media marketing — and if fans can’t make TikToks using sounds from the new record, the album might be … slightly less ubiquitous? But the partnership is beneficial for TikTok too. With a fanbase like Swift’s, it’s inevitable that numerous audio trends will emerge from the album, and TikTok won’t want to miss out on that engagement, especially since Reels will have that music anyway.



Spotify launches personalized AI playlists that you can build using prompts | TechCrunch


Spotify already found success with its popular AI DJ feature, and now the streaming music service is bringing AI to playlist creation. The company on Monday introduced into beta a new option called AI playlists, which allows users to generate a playlist based on written prompts.

The feature will initially become available on Android and iOS devices in the U.K. and Australia and will evolve over time.

In addition to more standard playlist creation requests, like those based on genre or time frame, Spotify’s use of AI means people could ask for a wider variety of custom playlists, like “songs to serenade my cat” or “beats to battle a zombie apocalypse,” Spotify suggests. Prompts can reference all sorts of things, like places, animals, activities, movie characters, colors or emojis. The company notes that the best playlists are generated using prompts that contain a combination of genres, moods, artists and decades, however.

Spotify also leverages its understanding of users’ tastes to customize the playlists it makes with the feature.

After the playlist is generated, users can then use the AI to revise and refine the end result by issuing commands like “less upbeat” or “more pop,” for example. Users can also swipe left on any songs to remove them from the playlist.

In terms of the technology, Spotify says it’s using large language models (LLMs) to understand the user’s intent. Then, Spotify uses its personalization technology — the information it has about the listener’s history and preferences — to fulfill the prompt and create a personalized AI-generated playlist for the user.
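Spotify hasn't published implementation details, but the two-stage design described above — a language model to interpret the prompt, then personalization to fill it with tracks the listener will like — can be sketched roughly. Everything in this example (the function names, the attribute schema, the scoring) is a hypothetical illustration, not Spotify's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    genres: list = field(default_factory=list)
    moods: list = field(default_factory=list)

def parse_prompt(prompt: str) -> Intent:
    """Stand-in for the LLM step: map a free-text prompt to structured
    attributes. A real system would call a language model here."""
    vocab = {"pop": "genres", "jazz": "genres", "upbeat": "moods", "mellow": "moods"}
    intent = Intent()
    for word in prompt.lower().split():
        if word in vocab:
            getattr(intent, vocab[word]).append(word)
    return intent

def rank_tracks(intent: Intent, catalog: list, user_affinity: dict) -> list:
    """Personalization step: score each track by how well its tags match
    the parsed intent, plus the listener's affinity for its artist."""
    def score(track):
        match = sum(g in intent.genres for g in track["tags"]) \
              + sum(m in intent.moods for m in track["tags"])
        return match + user_affinity.get(track["artist"], 0.0)
    return sorted(catalog, key=score, reverse=True)

catalog = [
    {"title": "A", "artist": "X", "tags": ["pop", "upbeat"]},
    {"title": "B", "artist": "Y", "tags": ["jazz", "mellow"]},
    {"title": "C", "artist": "Z", "tags": ["pop"]},
]
playlist = rank_tracks(parse_prompt("upbeat pop for the gym"),
                       catalog, user_affinity={"Z": 0.5})
```

The separation matters: the intent parser only has to understand language, while the ranker only has to know the listener — which is presumably why Spotify can combine third-party LLMs with its in-house personalization stack.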

The company uses a range of third-party tools for its AI and machine learning experiences.

TechCrunch first reported in October 2023 that Spotify was developing AI playlists, when reverse engineers Chris Messina and Alessandro Paluzzi shared screenshots of code from Spotify’s app that referred to AI playlists that were “based on your prompts.”

Spotify at the time declined to comment on the finding, saying it would not offer a statement on possible new features. However, in December 2023, the company confirmed that it was testing AI-driven playlist creation after a TikTok video of the feature surfaced showing what the Spotify user described as “Spotify’s ChatGPT.”

Image Credits: Spotify

The feature is found in the “Your Library” tab in Spotify’s app by tapping on the plus button (+) at the top right of the screen. A pop-up menu appears showing the AI Playlist as a new option alongside the existing “Playlist” and “Blend” options.

If a listener can’t think of any prompts to try, Spotify offers prompt suggestions to help people get started, like “get focused at work with instrumental electronica,” “fill in the silence with background café music,” “get pumped up with fun, upbeat, and positive songs” or “explore a niche genre like Witch House” and many others.

To save an AI playlist, tap the “Create” button to add it to the library.

The company notes the AI has guardrails around it so it will not respond to offensive prompts or those focused on current events or specific brands.

Spotify has been investing in AI technology to improve its streaming service for many months. With the launch of AI DJ, which expanded globally last year, the company used a combination of Sonantic and OpenAI technology to create an artificial version of the voice of Spotify’s head of cultural partnerships, Xavier “X” Jernigan, who introduces personalized song selections to the user. Last year, Spotify said it was investing in in-house research to better understand the latest in AI and large language models.

CEO Daniel Ek has also teased to investors other ways Spotify could leverage AI, including by summarizing podcasts, creating AI-generated audio ads and more. The company has also looked into using AI tech that would clone a podcast host’s voice for host-read ads.

Ahead of AI playlists, Spotify launched a similar feature, Niche Mixes, that allowed users to create personalized playlists using prompts, but the product did not leverage AI technology and was more limited in terms of its language understanding.



Beyoncé's new album 'Cowboy Carter' is a statement against AI music | TechCrunch


Beyoncé’s “Cowboy Carter” has been out for only a few days, yet it’s already obvious that we’ll be talking about it for years to come — it’s breaking records across streaming platforms, and the artist herself calls it “the best music [she’s] ever made.” But in the middle of the press release for “Cowboy Carter,” Beyoncé made an unexpected statement against the growing presence of AI in music.

“The joy of creating music is that there are no rules,” said Beyoncé. “The more I see the world evolving the more I felt a deeper connection to purity. With artificial intelligence and digital filters and programming, I wanted to go back to real instruments.”

Beyoncé rarely does interviews, giving each of her comments about the new album more significance — these remarks are among few jumping-off points fans get to help them puzzle through each element of the album, and how they all fit together. So her stance on AI isn’t just a throwaway comment made in conversation with a reporter. It’s deliberate.

The central backlash against AI-generated art comes from the way this technology works. AI-powered music generators can create new tracks in minutes and emulate artists’ vocals to a scarily convincing degree. In some cases, that’s because the AI is being trained on the work of the artists whose jobs it could end up replacing.

Large language models and diffusion models both require sprawling databases of text, images and sounds to be able to create AI-generated works. Some of the best-known AI companies, like OpenAI and Stability AI, use datasets that include copyrighted artworks without consent. Even though Stability AI’s music model was trained on licensed stock music, that’s not the case for the company’s image generator, Stable Diffusion. Stability AI’s VP of Audio, Ed Newton-Rex, quit his job over this, because he “[doesn’t] agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use.’”

It’s no wonder artists like Beyoncé have strong feelings about this technology — too many AI models have been trained on artists’ work without their consent, and especially for rising musicians who don’t have the clout to buoy them, it will be even harder to break into an already ruthless industry. Beyoncé’s stance makes even more sense in the context of “Cowboy Carter” itself.

Though it does not explicitly discuss AI, “Cowboy Carter” already addresses the theft and appropriation of artworks without consent. On the album itself, Beyoncé is giving listeners a history lesson about how Black musicians formed the foundation of country music, which is too often assumed to represent Southern white culture.

Even the title, “Cowboy Carter,” is a nod to the appropriation of Black music for white people’s gain. Though “Carter” could reference Beyoncé’s married name, it’s also a nod to the Carters, the “first family” of country music — and those Carters took the work of Black musicians to develop the style we now know as country, which continues to exclude Black artists (just recently, an Oklahoma country radio station refused a listener’s request to play Beyoncé’s “Texas Hold ‘Em,” since Beyoncé didn’t fit their definition of a country artist). Beyoncé’s seemingly random stance against AI unearths a similar truth: Once again, artists’ work is being stolen without their consent and contorted into something else, leaving them without payment or credit for their cultural contributions.

There are a few moments on the album when 90-year-old country icon Willie Nelson appears on a radio show called “Smoke Hour,” and its first appearance precedes “Texas Hold ‘Em.” The placement of the track takes on an extra layer of meaning in light of the Oklahoma radio incident, and Nelson makes a slight jab: “Now for this next tune, I want y’all to sit back, inhale, and go to the good place your mind likes to wander off to. And if you don’t wanna go, go find yourself a jukebox.”

This is Beyoncé’s world: The jukebox and the radio are back in style, Black musicians can make whatever kind of music they want, and no one’s art gets stolen.





Musical toy startup Playtime Engineering wants to simplify electronic music making for kids | TechCrunch


Troy Sheets began making music at 15 years old in his home studio with a keyboard synthesizer, drum machine and four-track cassette recorder — an impressive setup for a high school sophomore. However, it’s rare for young, up-and-coming musicians to have access to advanced equipment (other than a free app on their phone). And most adolescents can’t afford it. Plus, for someone starting out, a synthesizer can be confusing to use.

That’s why Sheets decided to develop the $199 Blipblox, an affordable kid-friendly synthesizer designed for ages 3 and up.

“I thought that there’s an opportunity to create a toy-like device that was simplified so more people could have fun using these tools that had previously been reserved for professional musicians because of their cost and complexity,” Sheets told TechCrunch.

Now, Playtime Engineering — Blipblox’s parent company — is ready to release its latest product. Called MyTracks, the new “toy-like instrument” (as the company calls it) is essentially a groovebox, or electronic music production device, fully decked out with a drum machine, a synthesizer, a sequencer and a built-in microphone for audio sampling, all in one device. With its chunky control knobs and levers and an easy-to-use randomize feature, MyTracks aims to encourage music exploration and simplify beatmaking for kids. The company announced Tuesday that its Kickstarter campaign for MyTracks will open on April 9 with an expected price of $249 to $299 for backers, and the first round of products is anticipated in November. The expected retail price is $349.

The product is designed first and foremost to be kid-friendly. According to the company, all Blipblox devices underwent “rigorous” testing to ensure they are BPA-free and comply with toy safety standards. To avoid choking hazards, the plastic knobs are locked into the device so kids can’t remove them. Additionally, the batteries are secured inside a screw-down compartment.

The company says its products are the only synthesizers on the market fully certified to international child safety standards.

In terms of its design, the flashy lights and colorful buttons are meant to appeal to kids. Sheets adds that the levers are one of the most popular features, since they make the device feel like a “spaceship control panel.”

But Blipblox wants adult musicians to take it seriously as well.

“These are real musical instruments, and not just ones that look like a [toy] guitar that you press a button and it plays the same sound every time. It really does engage adults the same way that it engages kids,” says co-founder Kate Sheets.

The layout of the MyTracks machine resembles a traditional groovebox or MPC (music production center) with two effects (FX) processors, five tracks, 25 pads and over 50 acoustic, electronic and percussive instrument sounds. In addition, it has the ability to layer, record and save songs.

On the back of the MyTracks device there’s a MIDI output, so professional musicians wanting to play around with a fun new toy can use it in the studio with their other gear. It also includes a stereo audio output and a USB-C port for adding more tracks. Future updates will include more sound packs to provide new music styles like classical, hip-hop and EDM.

Image Credits: Playtime Engineering

More than $300 is indeed a steep price tag for a children’s toy, and not many parents are willing to cough up that much cash. However, the company argues that it can be a great tool for children to learn how to create music, manipulate sounds and experiment. Blipblox devices have even been used by music teachers, including to help special-needs kids express their creativity non-verbally.

“[Blipblox devices] are adjustable, so you can adjust [the volume] for different sensitivities. So, neurodiverse students have really enjoyed using our products,” Kate Sheets tells us.

The company previously won the SBO (School Band and Orchestra) Best Teaching Tool award for preschool students.

 

Image Credits: Playtime Engineering

“We got a lot of weird looks from parents,” Kate Sheets says, describing how people reacted to the first Blipblox synthesizer in 2018. “The music device industry looked at us and thought we were a toy, and the toy industry looked at us and thought, ‘We don’t even know what that is.’ We sort of straddled both markets for a while. And now, all these years later, we’re seeing that there really is a market for our type of products.”

Despite the initial reactions, Blipblox has managed to sell 15,000 products and raised more than $300,000 in crowdfunding to date.



Women in AI: Emilia Gómez at the EU started her AI career with music | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Emilia Gómez is a principal investigator at the European Commission’s Joint Research Centre and scientific coordinator of AI Watch, the EC initiative to monitor the advancements, uptake and impact of AI in Europe. Her team contributes scientific and technical knowledge to EC AI policies, including the recently proposed AI Act.

Gómez’s research is grounded in the computational music field, where she contributes to the understanding of the way humans describe music and the methods in which it’s modeled digitally. Starting from the music domain, Gómez investigates the impact of AI on human behavior — in particular the effects on jobs, decisions and child cognitive and socioemotional development.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I started my research in AI, in particular in machine learning, as a developer of algorithms for the automatic description of music audio signals in terms of melody, tonality, similarity, style or emotion, which are exploited in different applications from music platforms to education. I started to research how to design novel machine learning approaches dealing with different computational tasks in the music field, and on the relevance of the data pipeline including data set creation and annotation. What I liked about machine learning at the time was its modeling capabilities and the shift from knowledge-driven to data-driven algorithm design — e.g. instead of designing descriptors based on our knowledge of acoustics and music, we were now using our know-how to design data sets, architectures and training and evaluation procedures.

From my experience as a machine learning researcher, and seeing my algorithms “in action” in different domains, from music platforms to symphonic music concerts, I realized the huge impact that those algorithms have on people (e.g. listeners, musicians) and directed my research toward AI evaluation rather than development, in particular on studying the impact of AI on human behavior and how to evaluate systems in terms of aspects such as fairness, human oversight or transparency. This is my team’s current research topic at the Joint Research Centre.

What work are you most proud of (in the AI field)?

On the academic and technical side, I’m proud of my contributions to music-specific machine learning architectures at the Music Technology Group in Barcelona, which have advanced the state of the art in the field, as it’s reflected in my citation records. For instance, during my PhD I proposed a data-driven algorithm to extract tonality from audio signals (e.g. if a musical piece is in C major or D minor) which has become a key reference in the field, and later I co-designed machine learning methods for the automatic description of music signals in terms of melody (e.g. used to search for songs by humming), tempo or for the modeling of emotions in music. Most of these algorithms are currently integrated into Essentia, an open source library for audio and music analysis, description and synthesis and have been exploited in many recommender systems.
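Tonality extraction of the kind described — deciding whether a piece is in C major or D minor from audio — classically works by correlating the piece's pitch-class (chroma) distribution against rotated key templates. The sketch below uses the well-known Krumhansl-Kessler profiles; it illustrates the general template-matching technique, not Gómez's specific data-driven algorithm:

```python
import numpy as np

# Krumhansl-Kessler key profiles: perceptual weight of each pitch class
# (C, C#, D, ..., B) in a major key and in a minor key.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(chroma):
    """Return the (tonic, mode) pair whose rotated profile correlates
    best with the 12-bin chroma vector — 24 candidate keys in total."""
    best, best_r = None, -2.0
    for mode, profile in (("major", MAJOR), ("minor", MINOR)):
        for tonic in range(12):
            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if r > best_r:
                best, best_r = (NOTES[tonic], mode), r
    return best

# Toy chroma vector: energy only on C, E and G (a C major triad).
chroma = np.zeros(12)
chroma[[0, 4, 7]] = 1.0
key = estimate_key(chroma)
```

In a real system the chroma vector would be computed from the audio (as libraries like Essentia do) rather than hand-built, and data-driven methods learn the templates instead of using fixed perceptual profiles.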

I’m particularly proud of Banda Sonora Vital (LifeSoundTrack), a project that received the Red Cross Award for Humanitarian Technologies, in which we developed a personalized music recommender adapted to senior patients with Alzheimer’s. There’s also PHENICX, a large European Union (EU)-funded project I coordinated on the use of music and AI to create enriched symphonic music experiences.

I love the music computing community and I was happy to become the first female president of the International Society for Music Information Retrieval, to which I’ve been contributing all my career, with a special interest in increasing diversity in the field.

Currently, in my role at the Commission, which I joined in 2018 as lead scientist, I provide scientific and technical support to AI policies developed in the EU, notably the AI Act. From this recent work, which is less visible in terms of publications, I’m proud of my humble technical contributions to the AI Act — I say “humble” as you may guess there are many people involved here! As an example, there’s a lot of work I contributed to on the harmonization or translation between legal and technical terms (e.g. proposing definitions grounded in existing literature) and on assessing the practical implementation of legal requirements, such as transparency or technical documentation for high-risk AI systems, general-purpose AI models and generative AI.

I’m also quite proud of my team’s work in supporting the EU AI liability directive, where we studied, among others, particular characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability or their self- and continuous-learning capabilities, and assessed associated difficulties presented when it comes to proving causation.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

It’s not only tech — I’m also navigating a male-dominated AI research and policy field! I don’t have a technique or a strategy, as it’s the only environment I know. I don’t know how it would be to work in a diverse or a female-dominated working environment. “Wouldn’t it be nice?” as the Beach Boys’ song goes. I honestly try to avoid frustration and have fun in this challenging scenario, working in a world dominated by very assertive guys and enjoying collaborating with excellent women in the field.

What advice would you give to women seeking to enter the AI field?

I would tell them two things:

You’re much needed — please enter our field, as there’s an urgent need for diversity of visions, approaches and ideas. For instance, according to the divinAI project — a project I co-founded on monitoring diversity in the AI field — only 23% of author names at the International Conference on Machine Learning and 29% at the International Joint Conference on AI in 2023 were female, regardless of their gender identity.

You aren’t alone — there are many women, nonbinary colleagues and male allies in the field, even though we may not be so visible or recognized. Look for them and get their mentoring and support! In this context, there are many affinity groups present in the research field. For instance, when I became president of the International Society for Music Information Retrieval, I was very active in the Women in Music Information Retrieval initiative, a pioneer in diversity efforts in music computing with a very successful mentoring program.

What are some of the most pressing issues facing AI as it evolves?

In my opinion, researchers should devote as many efforts to AI development as to AI evaluation, as there’s now a lack of balance. The research community is so busy advancing the state of the art in terms of AI capabilities and performance and so excited to see their algorithms used in the real world that they forget to do proper evaluations, impact assessment and external audits. The more intelligent AI systems are, the more intelligent their evaluations should be. The AI evaluation field is under-studied, and this is the cause of many incidents that give AI a bad reputation, e.g. gender or racial biases present in data sets or algorithms.
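One concrete example of the evaluation work Gómez argues for: even a basic group-fairness audit — comparing a classifier's positive-prediction rates across demographic groups — can surface the kind of bias she mentions. This is a generic, minimal sketch of one standard metric (demographic parity), not tied to any specific system from the interview:

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two
    groups. A gap near 0 suggests similar treatment on this axis."""
    rates = [np.mean(predictions[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy audit: binary predictions for individuals from groups "a" and "b".
preds = np.array([1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap = demographic_parity_gap(preds, groups)  # "a" rate 0.75 vs "b" rate 0.0
```

A full evaluation would go well beyond one metric — fairness, human oversight and transparency, as the interview notes — but even this check is more than many deployed systems receive.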

What are some issues AI users should be aware of?

Citizens using AI-powered tools, like chatbots, should know that AI is not magic. Artificial intelligence is a product of human intelligence. They should learn about the working principles and limitations of AI algorithms to be able to challenge them and use them in a responsible way. It’s also important for citizens to be informed about the quality of AI products, how they are assessed or certified, so that they know which ones they can trust.

What is the best way to responsibly build AI?

In my view, the best way to develop AI products (with a good social and environmental impact and in a responsible way) is to spend the needed resources on evaluation, assessment of social impact and mitigation of risks — for instance, to fundamental rights — before placing an AI system in the market. This is for the benefit of businesses and of trust in products, but also of society.

Responsible AI or trustworthy AI is a way to build algorithms where aspects such as transparency, fairness, human oversight or social and environmental well-being need to be addressed from the very beginning of the AI design process. In this sense, the AI Act not only sets the bar for regulating artificial intelligence worldwide, but it also reflects the European emphasis on trustworthiness and transparency — enabling innovation while protecting citizens’ rights. This I feel will increase citizen trust in the product and technology.

