
IBM bats for regulating AI based on accountability, security


San Francisco | Thursday, 2020 1:15:05 AM IST
IT major IBM has released a regulatory framework for organisations that develop or use Artificial Intelligence, built on accountability, transparency, fairness and security.

These IBM recommendations come as the new European Commission has indicated that it will legislate on AI within the first 100 days of 2020, and the White House has released new guidelines for the regulation of AI.

The "Precision Regulation for Artificial Intelligence" released by The IBM Policy Lab builds upon IBM's calls for a "precision regulation" approach to facial recognition and illegal online content - laws tailored to hold companies more accountable, without becoming over-broad in a way that hinders innovation or the larger digital economy.

Specifically, IBM's new policy paper outlines five policy imperatives for companies, whether they are providers or owners of AI systems, that can be reinforced by regulation.

To ensure compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official.

All entities providing or owning an AI system should conduct an initial high-level assessment of the technology's potential for harm, and regulation should treat different use cases differently based on their inherent risk.

The best way to promote transparency is through disclosure, making the purpose of an AI system clear to consumers and businesses, according to the regulatory framework.

No one should be tricked into interacting with AI, it added.

Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualise how and why it arrived at a particular conclusion.

All organisations in the AI development lifecycle share some level of responsibility for ensuring the AI systems they design and deploy are fair and secure.

This requires testing for fairness, bias, robustness and security, and taking remedial actions as needed, both before sale or deployment and after the system is operationalised.
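
For illustration only - this is not part of IBM's framework - the short Python sketch below shows one simple form such pre-deployment testing can take: a disparate impact ratio computed over hypothetical model decisions, with the data, threshold and function name invented for the example.

# Illustrative sketch: a minimal pre-deployment bias check, not IBM's method.
# It compares the rate of favourable outcomes for a protected group against a
# reference group; a common rule of thumb flags ratios below 0.8 for review.

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Return P(favourable | protected) / P(favourable | reference)."""
    def favourable_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    reference_rate = favourable_rate(reference)
    return favourable_rate(protected) / reference_rate if reference_rate else float("inf")

# Hypothetical model outputs: 1 = favourable decision, 0 = unfavourable.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"Disparate impact ratio (B vs A): {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged: remedial action and retesting before deployment.")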

This should be reinforced through "co-regulation", where companies implement testing and government conducts spot checks for compliance, IBM said.

--IANS gb/bg

(348 Words)

2020-01-22 19:20:13 (IANS)

 