Toward a world without weapons created by artificial intelligence

By Geraldine Russell - 20 July 2018 / 07:00 (updated 19 July 2018)

Every Friday, in its press review, Maddyness offers a selection of articles on a hot topic that caught the editorial team's attention. This week, artificial intelligence experts pledge not to put AI at the service of lethal weapons.

Thousands of artificial intelligence experts pledge not to take part in creating weapons

The facts

"The decision to take a human life should never be delegated to a machine." In an open letter published on Wednesday 18 July, more than 2,400 researchers, engineers and entrepreneurs in the artificial intelligence (AI) sector pledged "never to participate in or support the development, manufacture, trade or use of lethal autonomous weapons". Among the signatories are Tesla CEO Elon Musk (known for his alarming statements about the dangers of AI), the leaders of Google DeepMind (the cutting-edge company behind the machine's victory over humans at the game of Go), and prominent figures in the field such as Stuart Russell, Yoshua Bengio and Toby Walsh. Read more at Le Monde

Artificial Intelligence Experts Warn of ‘Dystopian Future With Robots Flying Around Killing Everybody’

The analysis

As AI technology has continued to advance, the United Nations has convened a group of governmental experts to address mounting concerns raised by human rights organizations, advocacy groups, military leaders, lawmakers, and tech experts, many of whom have, for years, demanded a global ban on killer robots. In recent years, tech experts have used IJCAI as an opportunity to pressure world leaders to outlaw autonomous weapons which, as the new pledge warns, "could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems." Without a ban on such weaponry, they "could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage." Read more at MintPress News

China and the US are racing to develop AI weapons

The context

When Google's AlphaGo defeated the Chinese grandmaster at a game of Go in 2017, China was confronted with its own "Sputnik moment": a prompt to up its game on the development of artificial intelligence (AI). Sure enough, Beijing is pursuing a national-level AI innovation agenda for "civil-military fusion". It's part of China's ambitious quest to become a "science and technology superpower" – but also a new front in an increasingly worrisome arms race. In 2017, the Chinese president, Xi Jinping, explicitly called for the acceleration of military AI research to better prepare China for future warfare against a major adversary such as the US. China's approach to AI has been heavily influenced by its assessment of US military initiatives, in particular the Pentagon's Third Offset Strategy, an Obama-era plan that gave the Pentagon a mandate to experiment with cutting-edge weapons technologies, AI among them. Read more at The Conversation

Will artificial intelligence create killer robots or steal our jobs? Here’s how a Singapore bank deals with the ethics of AI

The example

The Singapore government recently announced that it is setting up an advisory council to delve into the ethical use of Artificial Intelligence (AI). The Advisory Council will assist the Government in developing ethics standards and reference governance frameworks, and in issuing advisory guidelines, practical guidance and codes of practice for voluntary adoption by businesses. OCBC Bank also launched its own AI unit earlier this year with an initial investment budget of S$10 million over three years to strategically develop in-house capabilities. Broadly, four key principles steer the bank's development in AI. Read more at Business Insider Singapore

Forget Killer Robots: Autonomous Weapons Are Already Online

The (real) threat

Earlier this year, concerns over the development of autonomous military systems — essentially AI-driven machinery capable of making battlefield decisions, including the selection of targets — were once again the center of attention at a United Nations meeting in Geneva. "Where is the line going to be drawn between human and machine decision-making?" Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security in Washington, D.C., told Time magazine. "Are we going to be willing to delegate lethal authority to the machine?" As worrying as that might seem, however, killer soldier-robots are largely the stuff of science fiction (for now) — quite unlike a much bigger threat that is already upon us: cyber weapons that can operate with a great deal of autonomy and have the potential to crash financial networks and disable power grids all on their own. Read more at Undark
