The British data regulator is set to analyze the privacy implications of processing scraped data used for training generative artificial intelligence algorithms. The Information Commissioner's Office is soliciting comments from AI developers, legal experts and other industry stakeholders.
Federal agencies are making significant headway in achieving a series of critical cybersecurity milestones included in a sweeping executive order on artificial intelligence the president signed in October 2023, according to White House Special Advisor for AI Ben Buchanan.
Artificial intelligence-enabled voter misinformation campaigns and voter database hacking are among the largest threats to election security in a year when more than half of the world's population will head to the polls in elections ranging from free to flawed.
In the latest weekly update, ISMG editors discussed how the surge in API usage poses challenges for organizations, why good governance is so crucial to solving API issues and how The New York Times' legal action against OpenAI and Microsoft highlights copyright concerns.
In a year in which the financial impact of cyberattacks has more than doubled to $1.4 million, organizations are exploring generative artificial intelligence but so far mostly sticking to machine learning, Dell reported on Tuesday after surveying 1,500 IT and security decision-makers.
The European Commission took preliminary steps toward investigating Microsoft's financial interest in ChatGPT maker OpenAI under the trading bloc's antitrust regulation. The Tuesday announcement marks the second instance of official interest in Microsoft's investments in the generative AI firm.
Hewlett Packard Enterprise announced a $14 billion acquisition deal with networking equipment maker Juniper Networks and is touting the deal as a way to position the Silicon Valley stalwart for the burgeoning artificial intelligence market. The transaction values Juniper at $40 per share.
ChatGPT maker OpenAI acknowledged that it would be "impossible" to develop generative artificial intelligence systems without using copyrighted material. The company defended the practice, arguing that current copyright law does not forbid the use of copyrighted works as training data.
Alex Zeltcer, CEO and co-founder at nSure.ai, believes more companies are using AI and gen AI to create synthetic data that can be used to identify fraud rings targeting online shoppers and gamers. He also observes social engineering at scale, perpetrated by machines, to conduct fraud.
In a solicitation for synthetic data generators, the U.S. federal government is looking for a tool that can generate fake data mimicking real-world scenarios, such as identifying cybersecurity threats. Synthetic data can boost the accuracy of machine learning models or be used to test systems.
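To make the idea concrete, here is a minimal sketch of a synthetic data generator for cybersecurity-style records. The field names, distributions, and labels are invented for illustration and do not reflect the government solicitation's actual requirements:

```python
import random

def synth_record(rng: random.Random) -> dict:
    """Generate one fake network-activity record (all fields hypothetical)."""
    return {
        # Random IPv4-style address (avoids 0 and 255 octets for simplicity).
        "src_ip": ".".join(str(rng.randint(1, 254)) for _ in range(4)),
        # Traffic volume drawn from a log-normal, a common heavy-tailed choice.
        "bytes_sent": int(rng.lognormvariate(7, 1.5)),
        # Most hosts see few failed logins; a small minority see bursts.
        "failed_logins": rng.choices([0, 1, 5, 20], weights=[80, 12, 6, 2])[0],
        # Roughly 5% of records labeled suspicious for training a classifier.
        "label": "suspicious" if rng.random() < 0.05 else "benign",
    }

# Seeded generator makes the synthetic dataset reproducible for testing.
rng = random.Random(42)
dataset = [synth_record(rng) for _ in range(1000)]
print(len(dataset), dataset[0]["label"])
```

A dataset like this can be fed to a model or a test harness without exposing any real user data, which is the core appeal of synthetic generation.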
Machine learning systems are vulnerable to cyberattacks that could allow hackers to evade security and prompt data leaks, scientists at the National Institute of Standards and Technology warned. There is "no foolproof defense" against some of these attacks, researchers said.
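One class of attack NIST's researchers describe is evasion, where an attacker nudges an input just enough to flip a model's decision. The toy example below illustrates the idea with a white-box gradient-sign perturbation (FGSM-style) against a simple logistic classifier; the model, weights, and threat scenario are all invented for illustration, not drawn from the NIST report:

```python
import math

# Hypothetical "trained" linear detector whose weights the attacker knows.
w = [2.0, -1.5, 0.5]
b = 0.1

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(x: list) -> float:
    """Probability the input is malicious, per the toy detector."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# An input the detector correctly flags as malicious (label y = 1).
x = [1.0, -1.0, 0.5]
y = 1.0

# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = [(p - y) * wi for wi in w]

# FGSM step: move each feature in the sign of its gradient.
epsilon = 1.5
x_adv = [xi + epsilon * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad_x)]

print(round(predict(x), 3))      # near 1.0: flagged as malicious
print(round(predict(x_adv), 3))  # below 0.5: evades the detector
```

The same gradient-following principle scales to deep networks, which is why the researchers caution that there is no foolproof defense.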
There are many potential uses for generative AI at financial services firms, but few are more promising than those in the areas of risk and fraud, said Kristine Demareski, vice president of payments at Genpact, which is already harnessing AI to increase efficiencies in analysts' decision-making.
AI, machine learning and large language models are not new, but they are coming to fruition with the mass adoption of generative AI. For cybersecurity professionals, these are "exciting times we live in," said Dan Grosu, CTO and CISO at Information Security Media Group.
The National Institute of Standards and Technology is failing to provide adequate information about how it plans to award funding opportunities to research institutions and private organizations through a newly established Artificial Intelligence Safety Institute, according to a group of lawmakers.
Healthcare CISOs must recognize the real and imminent threat of AI-fueled cyberattacks and take proactive steps, including the deployment of AI-based security tools, to protect patient data and critical healthcare services, said Troy Hawes, managing director at consulting firm Moss Adams.