From Digital Age to Nano Age. WorldWide.

Tag: election


Spain bans Meta from launching election features on Facebook, Instagram over privacy fears | TechCrunch


Meta has been banned from launching features on Facebook and Instagram that would have collected data on Spanish users of the social networks ahead of next month’s European elections. The local data protection authority, the AEPD, used emergency powers to protect local users’ privacy. Meta confirmed to TechCrunch that it has complied with the […]


Microsoft and OpenAI launch $2M fund to counter election deepfakes | TechCrunch


Microsoft and OpenAI have announced a $2 million fund to combat the growing risks of AI and deepfakes being used to “deceive the voters and undermine democracy.” This year will see a record 2 billion people head to the polls in elections spanning some 50 countries, so there are concerns around the influence that AI […]


India urges political parties to avoid using deepfakes in election campaigns | TechCrunch


India’s Election Commission has issued an advisory to all political parties, urging them to refrain from using deepfakes and other forms of misinformation in their social media posts during the country’s ongoing general elections. The move comes after the constitutional body faced criticism for not doing enough to combat such campaigns in the world’s most populous nation.

The advisory, released on Monday (PDF), requires political parties to remove any deepfake audio or video within three hours of becoming aware of its existence. Parties are also advised to identify and warn the individuals responsible for creating the manipulated content. The Election Commission’s action follows a Delhi High Court order asking the body to resolve the matter after the issue was raised in a petition.

India, home to more than 1.4 billion people, began its general elections on April 19, with the voting process set to conclude on June 1. The election has already been marred by controversies surrounding the use of deepfakes and misinformation.

Prime Minister Narendra Modi complained late last month about the use of fake voices to purportedly show leaders making statements they had “never even thought of,” alleging that this was part of a conspiracy designed to sow tension in society.

The Indian police have arrested at least six people from the social media teams of the Indian National Congress, the nation’s top opposition party, for circulating a fake video showing Home Minister Amit Shah making statements that Shah says he never made.

India has been grappling with the use and spread of deepfakes for several months now. Ashwini Vaishnaw, India’s IT minister, met with large social media companies, including Meta and Google, in November and “reached a consensus” that regulation was needed to better combat the spread of deepfake videos, as well as the apps that facilitate their creation.

In January, another IT minister warned tech companies of severe penalties, including bans, if they failed to take active measures against deepfake videos. The nation has yet to codify its draft regulation on deepfakes into law.

The Election Commission said on Monday it has been “repeatedly directing” the political parties and their leaders to “maintain decorum and utmost restraint in public campaigning.”



Google expands passkey support to its Advanced Protection Program ahead of the US presidential election | TechCrunch


Ahead of the U.S. presidential election, Google is bringing passkey support to its Advanced Protection Program (APP), which is used by people who are at high risk of targeted attacks, such as campaign workers, candidates, journalists, human rights workers, and more.

APP traditionally required the use of hardware security keys, but users will soon be able to enroll in APP with passkeys instead. They will have the option to use passkeys alone or alongside a password or hardware security key.

“In a critical election year, we’ll be bringing this feature to our users who need it most, and continue to work with experts like Defending Digital Campaigns, the International Foundation for Electoral Systems, Asia Centre, Internews, and Possible to help protect global high-risk users,” Google’s VP of Security Engineering, Heather Adkins, said in a blog post.

Google says passkeys have been used to authenticate users more than one billion times across over 400 million Google Accounts since the company launched passkey support in 2022. It also says passkeys are now used on Google Accounts more often than SMS one-time passwords and app-based one-time passwords combined, both legacy forms of two-step verification.

Passkey logins make it harder for bad actors to remotely access your accounts, since they would also need physical access to the device that holds the key. Passkeys also remove the need to rely on username and password combinations, which are susceptible to phishing.
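
For a sense of what that means in practice, here is a minimal sketch of browser-side passkey registration using the standard WebAuthn API. The relying-party details, user identifiers and server-issued challenge are hypothetical placeholders, not Google’s actual APP enrollment flow.

```typescript
// Minimal browser-side passkey registration via the WebAuthn API.
// The challenge and user ID would come from your server; all names here
// (example.com, "Example App") are illustrative placeholders.
async function registerPasskey(
  challenge: Uint8Array,
  userId: Uint8Array
): Promise<Credential | null> {
  // The authenticator generates a key pair on-device. The private key never
  // leaves the device, and the server only ever receives the public key,
  // which is why a phished passkey login yields nothing an attacker can reuse.
  return navigator.credentials.create({
    publicKey: {
      challenge, // random bytes issued by the relying party
      rp: { name: "Example App", id: "example.com" },
      user: { id: userId, name: "user@example.com", displayName: "Example User" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: {
        residentKey: "required", // a discoverable credential, i.e. a passkey
        userVerification: "required", // unlock with biometrics or a device PIN
      },
    },
  });
}
```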

The technology has been adopted by numerous other companies, including Apple, Amazon, X (formerly Twitter), PayPal, WhatsApp, GitHub and TikTok.

Google also announced that it’s expanding its Cross-Account Protection program, which shares security notifications about suspicious activity with the third-party apps you’ve connected to your Google account. The company says this helps prevent cybercriminals from gaining access to one of your accounts and using it to infiltrate others. Google notes that it’s protecting 2.4 billion accounts across 3.4 million apps and sites and that it’s growing its collaborations across the industry.
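
Under the hood, Cross-Account Protection is built on the OpenID RISC (Risk Incident Sharing and Coordination) profile, in which Google pushes signed security event tokens to connected apps. The sketch below shows roughly what a decoded event payload can look like; the field values are hypothetical and the exact schema should be taken from Google’s documentation.

```typescript
// Illustrative shape of a decoded RISC security event token (a JWT payload).
// Values are hypothetical; the real schema is defined by the OpenID RISC
// profile and Google's Cross-Account Protection documentation.
interface SecurityEventToken {
  iss: string; // issuer, e.g. Google's account service
  aud: string; // the receiving app's OAuth client ID
  iat: number; // issued-at time (seconds since epoch)
  jti: string; // unique token ID, useful for de-duplication
  events: Record<string, { subject: { subject_type: string; sub: string } }>;
}

const example: SecurityEventToken = {
  iss: "https://accounts.google.com/",
  aud: "hypothetical-client-id.apps.googleusercontent.com",
  iat: 1714500000,
  jti: "demo-event-123",
  events: {
    // Event-type URI signaling the user's sessions were revoked, e.g. after
    // suspicious activity was detected on the Google account.
    "https://schemas.openid.net/secevent/risc/event-type/sessions-revoked": {
      subject: { subject_type: "iss_sub", sub: "hypothetical-google-user-id" },
    },
  },
};
```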



Meta's approach to election security in the frame as EU probes Facebook, Instagram | TechCrunch


The European Union announced Tuesday that it suspects Meta’s social networking platforms, Facebook and Instagram, of breaking the bloc’s rules for larger platforms in relation to election integrity.

The Commission has opened formal infringement proceedings to investigate Meta under the Digital Services Act (DSA), an online governance and content moderation framework. Reminder: penalties for confirmed breaches of the regime can include fines of up to 6% of global annual turnover.

The EU’s concerns here span several areas: Meta’s moderation of political ads — which it suspects is inadequate; Meta’s policies for moderating non-paid political content, which the EU suspects are opaque and overly restrictive, whereas the DSA demands platforms’ policies deliver transparency and accountability; and Meta’s policies that relate to enabling outsiders to monitor elections.

The EU’s proceeding also targets Meta’s processes for users to flag illegal content, which it’s concerned aren’t user-friendly enough, and its internal complaints-handling system for content moderation decisions, which it suspects is ineffective.

“When Meta get paid for displaying advertising it doesn’t appear that they have put in place effective mechanism of content moderation,” said a Commission official briefing journalists on background about the factors that led it to open the bundle of investigations. “Including for advertisements that could be generated by a generative AI — such as, for example, deep fakes — and these have been exploited or appear to have been exploited by malicious actors for foreign interference.”

The EU is drawing on independent research, itself enabled by another DSA requirement that large platforms publish a searchable ad archive, which it suggested has shown Meta’s ad platform being exploited by Russian influence campaigns targeting elections via paid ads. It also said it has found evidence of a general lack of effective ad moderation by Meta being exploited by scammers, with the Commission pointing to a surge in financial scam ads on the platform.
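
That archive is also queryable programmatically. As a rough illustration, assuming Meta’s public Ad Library API (the endpoint and parameters below follow its documented pattern, but treat the specifics as a sketch rather than a reference), a researcher might search political ads reaching EU countries like this:

```typescript
// Sketch of querying Meta's Ad Library API for political ads.
// The access token, API version and exact field names should be checked
// against Meta's current documentation before use.
async function searchPoliticalAds(accessToken: string, searchTerms: string) {
  const params = new URLSearchParams({
    search_terms: searchTerms,
    ad_type: "POLITICAL_AND_ISSUE_ADS",
    ad_reached_countries: '["ES","DE","FR"]', // EU countries of interest
    fields: "page_name,ad_delivery_start_time,ad_snapshot_url",
    access_token: accessToken,
  });
  const res = await fetch(`https://graph.facebook.com/v19.0/ads_archive?${params}`);
  if (!res.ok) throw new Error(`Ad Library request failed: ${res.status}`);
  return res.json(); // a paginated list of matching ads
}
```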

On organic (non-paid) political content, the EU said Meta seems to limit the visibility of political content for users by default but does not appear to provide sufficient explanation, either of how it identifies content as political or of how that moderation is carried out. The Commission also said it had found evidence suggesting Meta is shadowbanning (i.e., limiting the visibility or reach of) certain accounts with high volumes of political posting.

If confirmed, such actions would be a breach of the DSA as the regulation puts a legal obligation on platforms to transparently communicate the policies they apply to their users.

On election monitoring, the EU is particularly concerned about Meta’s recent decision to shutter access to CrowdTangle, a tool researchers have previously been able to use for real-time election monitoring.

It has not opened an investigation into this yet but has sent Meta an urgent formal request for information (RFI) about its decision to deprecate the research tool, giving the company five days to respond. Briefing journalists about the development, Commission officials suggested they could take further action in this area, such as opening a formal investigation, depending on Meta’s response.

The short deadline for a response clearly conveys a sense of urgency. Last year, soon after the EU took up the baton overseeing larger platforms’ DSA compliance with a subset of transparency and risk mitigation rules, the Commission named election integrity as one of its priority areas for its enforcement of the regulation.

During today’s briefing, Commission officials pointed to the upcoming European elections in June, questioning the timing of Meta’s decision to deprecate CrowdTangle. “Our concern — and this is also why we consider this to be a particularly urgent issue — is that just a few weeks ahead of the European election Meta has decided to deprecate this tool, which has allowed journalists… civil society actors and researchers in, for example, the 2020 US elections, to monitor election related risks.”

The Commission is worried that the tool Meta has said will replace CrowdTangle does not have equivalent or superior capabilities. Notably, the EU is concerned it will not let outsiders monitor election risks in real time. Officials also raised concerns about slow onboarding for Meta’s new tool.

“At this point we’re requesting information from Meta on how they intend to remedy the lack of a real-time election monitoring tool,” said one senior Commission official during the briefing. “We are also requesting some additional documents from them on the decision that led them to deprecate CrowdTangle and their assessment of the capabilities of the new tool.”

Meta was contacted for comment about the Commission’s actions. In a statement a company spokesperson said: “We have a well-established process for identifying and mitigating risks on our platforms. We look forward to continuing our cooperation with the European Commission and providing them with further details of this work.”

These are the first formal DSA investigations Meta has faced — but not the first RFIs. Last year the EU sent Meta a flurry of requests for information — including in relation to the Israel-Hamas war, election security and child safety, among others.

In light of the variety of information requests on Meta platforms, the company could face additional DSA investigations as Commission enforcers work through multiple submissions.



India's election overshadowed by the rise of online misinformation | TechCrunch


As India kicks off the world’s biggest election, which starts on April 19 and runs through June 1, the electoral landscape is overshadowed by misinformation.

The country — which has more than 830 million internet users and is home to the largest user base for social media platforms like Facebook and Instagram — is already at the highest risk of misinformation and disinformation, according to the World Economic Forum. AI has complicated the situation further, not least through deepfakes created with generative models.

Misinformation is not just a problem for election fairness: it can have deadly effects, including violence on the ground and increased hatred toward minorities.

Pratik Sinha, the co-founder of the Indian non-profit fact-checking website Alt News, says there’s been an increase in the deliberate creation of misinformation to polarize society. “Ever since social media has been thriving, there is a new trend where you use misinformation to target communities,” he said.

The country’s vast diversity in language and culture also makes it particularly hard for fact-checkers to review and filter out misleading content.

“India is unusual in its size and its history of democracy,” Angie Drobnic Holan, the director of the International Fact-Checking Network, told TechCrunch in an interview. “When you have got a lot of misinformation, you have a lot of need for fact-checking, and things that make the Indian environment more complex also are the many languages of India.”

The government has taken steps against the problem, but some critics argue that enforcement is weak, and the Big Tech platforms aren’t helping enough.

In 2022, the Indian government updated its IT intermediary rules to require social media companies to remove misleading content from their platforms within 72 hours of being reported. However, the results are unclear, and some digital advocacy groups, including the Internet Freedom Foundation, have noticed selective enforcement.

“You don’t want to have laws or rules that are so vague, that are so broad that they can be interpreted,” said Prateek Waghre, executive director of the Internet Freedom Foundation.

Google and Meta have announced measures to limit misleading content on their platforms during the Indian elections, and have restricted their AI bots from answering election queries, but they have made no significant product changes and taken no stringent action against fake news. Moreover, just before the Indian election, Meta reportedly cut funding to news organizations for fact-checking on WhatsApp.

Now fake news is proliferating on social media. Doctored videos of celebrities asking citizens to vote for a particular political party, along with fake news about the Model Code of Conduct as it applies to public programs and private chats, were circulating widely online before the election began.

Hamsini Hariharan, a subject matter expert at the U.K.-based fact-checking startup Logically, told TechCrunch about the trend of “cheapfakes” — content generated with less sophisticated measures of altering images, videos, and audio — being widely shared across social media platforms in India.

Last week, 11 civil society organizations in India, including the nonprofit digital rights groups Internet Freedom Foundation and Software Freedom Law Center (SFLC.in), urged India’s Election Commission to hold political candidates and social media platforms accountable for any misuse.

Hariharan underlined that the scale and sophistication of misinformation and disinformation have drastically increased over the last five years since India’s last general election in 2019. The key reasons, she believes, are the increase in internet penetration — it’s grown from 14% in 2014 to around 50% now, according to World Bank data — and the availability of technologies to manipulate audiovisual messages, low media literacy, and the mainstream media losing some of its credibility.

Logically noticed a particular spike in attempts to cast doubt on electronic voting machines. Its fact-checkers saw older claims, particularly videos and text from Supreme Court hearings about voting machines, being circulated without sufficient context. There were even posts claiming these machines were banned, faulty or tampered with, along with hashtags such as #BanEVM circulating among Facebook groups with thousands of followers.

Sinha of Alt News agreed that misleading online content has rapidly risen in the country. He noted that social media companies are not helping to limit such content on their platforms.

“Is there a single report that’s been published in four years as to how their fact-checking enterprise is doing? No, nothing, because they know it is not working. If it was working, they would have gone to town with it, but they know it’s not working,” he told TechCrunch.

Holan believes there is much room for product changes that emphasize accuracy and reliability.

“The platforms invested heavily during COVID in trust and safety programs. And since then, there’s clearly been a pullback,” she said.

Meta and X did not say why there have been no significant product-related updates to restrict misleading content, nor how much they have invested in fact-checking in India. However, a Meta spokesperson noted the existence of a WhatsApp tip line, launched in late March, and an awareness campaign on Instagram aimed at identifying and stopping misinformation using the platform’s built-in features.

“We have a multi-pronged approach to tackling misinformation that includes building an industry-leading network of fact-checkers in the country, including training them on tackling AI-generated misinformation,” the Meta spokesperson said in an emailed statement.

X did not answer a detailed questionnaire sent to the generic press email ID but said, “Busy now, please check back later.”





India, grappling with election misinfo, weighs up labels and its own AI safety coalition | TechCrunch


India, no stranger to co-opting tech to persuade the public, has become a global hot spot for how AI is being used, and abused, in political discourse, and specifically the democratic process. Tech companies, which built the tools in the first place, are making trips to the country to push solutions.

Earlier this year, Andy Parsons, a senior director at Adobe who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), stepped into the whirlpool with a trip to India, visiting media and tech organizations in the country to promote tools that can be integrated into content workflows to identify and flag AI-generated content.

“Instead of detecting what’s fake or manipulated, we as a society, and this is an international concern, should start to declare authenticity, meaning saying if something is generated by AI that should be known to consumers,” he said in an interview.

Parsons added that some Indian companies — currently not part of a Munich AI election safety accord signed by OpenAI, Adobe, Google and Amazon in February — intended to construct a similar alliance in the country.

“Legislation is a very tricky thing. To assume that the government will legislate correctly and rapidly enough in any jurisdiction is something hard to rely on. It’s better for the government to take a very steady approach and take its time,” he said.

Detection tools are famously inconsistent, but they are a start in fixing some of the problems, or so the argument goes.

“The concept is already well understood,” he said during his Delhi trip. “I’m helping raise awareness that the tools are also ready. It’s not just an idea. This is something that’s already deployed.”

Andy Parsons, senior director at Adobe. Image Credits: Adobe

The CAI — which promotes royalty-free, open standards for identifying whether digital content was generated by a machine or a human — predates the current hype around generative AI: it was founded in 2019 and now has 2,500 members, including Microsoft, Meta, Google, The New York Times, The Wall Street Journal and the BBC.

Just as there is an industry growing around the business of leveraging AI to create media, there is a smaller one being created to try to course-correct some of the more nefarious applications of that.

So in February 2021, Adobe went one step further and co-founded the Coalition for Content Provenance and Authenticity (C2PA) with ARM, BBC, Intel, Microsoft and Truepic. The coalition aims to develop an open standard that taps the metadata of images, videos, text and other media to record their provenance: the file’s origins, the location and time of its generation, and whether it was altered before it reached the user. The CAI works with C2PA to promote the standard and make it available to the masses.
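
To make that concrete, the sketch below gives a deliberately simplified picture of the kind of information a C2PA manifest records. The real specification defines a much richer, cryptographically signed structure; these types are illustrative, not the spec’s actual schema.

```typescript
// Simplified, illustrative view of a C2PA provenance manifest. The real
// standard (c2pa.org) is far richer and cryptographically signed end to end.
interface ProvenanceAssertion {
  action: "created" | "edited" | "ai_generated"; // what happened to the asset
  when: string; // ISO 8601 timestamp of the action
  softwareAgent?: string; // tool involved, e.g. "Adobe Firefly"
}

interface ContentCredentialsManifest {
  claimGenerator: string; // software that produced the claim
  assertions: ProvenanceAssertion[]; // ordered history of the asset
  signature: string; // signature binding the manifest to the file
}

// An AI-generated image that was later retouched might carry something like:
const manifest: ContentCredentialsManifest = {
  claimGenerator: "Hypothetical Imaging App 1.0",
  assertions: [
    { action: "ai_generated", when: "2024-04-30T10:00:00Z", softwareAgent: "Adobe Firefly" },
    { action: "edited", when: "2024-04-30T10:05:00Z", softwareAgent: "Photoshop" },
  ],
  signature: "<base64-encoded signature over the manifest>",
};
```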

Now it is actively engaging with governments like India’s to widen adoption of that standard, highlight the provenance of AI content, and work with authorities on guidelines for AI’s advancement.

Adobe has nothing but also everything to lose by playing an active role in this game. It’s not — yet — acquiring or building large language models (LLMs) of its own, but as the home of apps like Photoshop and Lightroom, it’s the market leader in tools for the creative community, and so not only is it building new products like Firefly to generate AI content natively, but it is also infusing legacy products with AI. If the market develops as some believe it will, AI will be a must-have in the mix if Adobe wants to stay on top. If regulators (or common sense) have their way, Adobe’s future may well be contingent on how successful it is in making sure what it sells does not contribute to the mess.

The bigger picture in India in any case is indeed a mess.

Google focused on India as a test bed for how it will bar use of its generative AI tool Gemini when it comes to election content; parties are weaponizing AI to create memes with likenesses of opponents; Meta has set up a deepfake “helpline” for WhatsApp, such is the popularity of the messaging platform in spreading AI-powered missives; and at a time when countries are sounding increasingly alarmed about AI safety and what they have to do to ensure it, we’ll have to see what the impact will be of India’s government deciding in March to relax rules on how new AI models are built, tested and deployed. It’s certainly meant to spur more AI activity, at any rate.

Using its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. CAI members are working to deploy this digital watermark on their content to let users know its origin and whether it is AI-generated. Adobe has added Content Credentials across its creative tools, including Photoshop and Lightroom, and attaches them automatically to AI content generated by its Firefly model. Last year, Leica launched a camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.

Image Credits: Content Credentials

Parsons told TechCrunch the CAI is talking with governments around the world about two things: helping to establish Content Credentials as an international standard, and encouraging governments to adopt it themselves.

“In an election year, it’s especially critical for candidates, parties, incumbent offices and administrations who release material to the media and to the public all the time to make sure that it is knowable that if something is released from PM [Narendra] Modi’s office, it is actually from PM Modi’s office. There have been many incidents where that’s not the case. So, understanding that something is truly authentic for consumers, fact-checkers, platforms and intermediaries is very important,” he said.

India’s large population and vast linguistic and demographic diversity make misinformation particularly hard to curb, he added, which he sees as an argument for simple labels that cut through the noise.

“That’s a little ‘CR’ … it’s two western letters like most Adobe tools, but this indicates there’s more context to be shown,” he said.

Controversy continues to surround what the real point might be behind tech companies supporting any kind of AI safety measure: Is it really about existential concern, or just having a seat at the table to give the impression of existential concern, all the while making sure their interests get safeguarded in the process of rule making?

“It’s generally not controversial with the companies who are involved, and all the companies who signed the recent Munich accord, including Adobe, who came together, dropped competitive pressures because these ideas are something that we all need to do,” he said in defense of the work.

