New Delhi, March 23 (IANS) With more than 60 countries, including India, entering election mode this year, it is vital that we remain vigilant about recent trends in the dynamic digital landscape, especially deepfakes, says Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro.
With the widespread use of generative AI, we face a new and concerning threat: deepfakes.
“Deepfakes have become accessible to everyone, posing a significant risk as these manipulations allow the creation and dissemination of realistic audio and video content featuring individuals saying and doing things they never actually said or did,” emphasised Bartoletti, also the founder of the ‘Women Leading in AI Network’.
The consequences extend beyond the digital realm, as online disinformation and coordination can spill over into real-world violence.
In India, the government has issued an update to its AI advisory, saying that big digital companies no longer need the government’s permission before launching an AI model in the country.
However, big tech companies are advised to label “under-tested and unreliable” AI models to inform users of their potential fallibility or unreliability.
“Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated,” according to the new MeitY advisory.
All intermediaries or platforms must ensure that the use of AI model(s)/LLM/Generative AI, software or algorithms “does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act.”
Digital platforms have been asked to comply with the new advisory with immediate effect.
According to Bartoletti, to ensure public safety, companies must take responsibility and implement measures to combat deepfakes and disinformation.
“This includes investing in advanced detection technologies to identify and flag deepfake content, as well as collaborating with experts to develop effective debunking methods,” she noted.
Additionally, promoting media literacy and critical thinking among the public is crucial.
“By taking proactive steps to address the risks of deepfakes, we can protect the integrity of elections and uphold the democratic process,” said Bartoletti.
–IANS
na/dan