Chinese AI startup DeepSeek overtakes ChatGPT on Apple App Store

Reuters 27.01.25

Compared with the US models, Chinese AI outperforms them at a much lower cost:

‘Chinese startup DeepSeek's AI Assistant on Monday overtook rival ChatGPT to become the top-rated free application available on Apple's App Store in the United States. Powered by the DeepSeek-V3 model, which its creators say "tops the leaderboard among open-source models and rivals the most advanced closed-source models globally", the artificial intelligence application has surged in popularity among U.S. users since it was released on Jan. 10, according to app data research firm Sensor Tower. The milestone highlights how DeepSeek has left a deep impression on Silicon Valley, upending widely held views about U.S. primacy in AI and the effectiveness of Washington's export controls targeting China's advanced chip and AI capabilities.’

AI means the end of internet search as we’ve known it

Technology Review 06.01.25

A curated version of reality is the most dangerous thing AI has delivered yet:

‘The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”’

Most Supposedly ‘Open’ AI Systems Are Actually Closed—and That’s a Problem

Singularity Hub 30.11.24

What’s in a name?:

‘“Open” AI models have a lot to give. The practice of sharing source code with the public spurs innovation and democratizes AI as a tool. Or so the story goes. A new analysis in Nature puts a twist on the narrative: Most supposedly “open” AI models, such as Meta’s Llama 3, are hardly that. Rather than encouraging or benefiting small startups, the “rhetoric of openness is frequently wielded in ways that…exacerbate the concentration of power” in large tech companies, wrote David Widder at Cornell University, Meredith Whittaker at Signal Foundation, and Sarah West at AI Now Institute. Why care? Debating AI openness seems purely academic. But with growing use of ChatGPT and other large language models, policymakers are scrambling to catch up. Can models be allowed in schools or companies? What guiderails should be in place to protect against misuse?’

Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds

The Guardian 27.11.24

Data-collecting tech companies don’t seem to be enjoying impunity in Australia:

‘Tech companies Amazon, Google and Meta have been criticised by a Senate select committee inquiry for being especially vague over how they used Australian data to train their powerful artificial intelligence products… Sheldon said Australia needed “new standalone AI laws” to “rein in big tech” and that existing laws should be amended as necessary. “They want to set their own rules, but Australians need laws that protect rights, not Silicon Valley’s bottom line,” he said. He said Amazon had refused during the inquiry to disclose how it used data recorded from Alexa devices, Kindle or Audible to train its AI. Google too, he said, had refused to answer questions about what user data from its services and products it used to train its AI products. Meta admitted it had been scraping from Australian Facebook and Instagram users since 2007, in preparation for future AI models. But the company was unable to explain how users could consent for their data to be used for something that did not exist in 2007. Sheldon said Meta dodged questions about how it used data from its WhatsApp and Messenger products.’

How the largest gathering of US police chiefs is talking about AI

Technology Review 19.11.24

A pick-and-choose system for the way police act and enforce their duties is indeed chaotic:

'My biggest takeaway from the conference was simply that the way US police are adopting AI is inherently chaotic. There is no one agency governing how they use the technology, and the roughly 18,000 police departments in the United States—the precise figure is not even known—have remarkably high levels of autonomy to decide which AI tools they’ll buy and deploy. The police-tech companies that serve them will build the tools police departments find attractive, and it’s unclear if anyone will draw proper boundaries for ethics, privacy, and accuracy… Without federal regulation on how police departments can and cannot use AI, the lines will be drawn by departments and police-tech companies themselves. “Ultimately, these are for-profit companies, and their customers are law enforcement,” says Stanley. “They do what their customers want, in the absence of some very large countervailing threat to their business model.”’

‘An existential threat’: anger over UK government plans to allow AI firms to scrape content

The Guardian 26.10.24

Left to their own devices, AI companies will hoover up all content. The UK’s desperate bid to attract tech giants is doing a huge disservice to the creative and literary arts:

‘Ministers are facing a major backlash over plans that would allow artificial intelligence companies to scrape content from publishers and artists, amid claims that the government risks “giving in” to the tech giants… The government is desperate to attract investment from tech firms as it searches for economic growth, and ministers have already announced total investment in UK datacentres of more than £25bn since the election. However, Google warned last month that Britain risks being left behind unless it builds more datacentres and lets tech firms use copyrighted work in their AI models. Apart from issues around ownership, some publishers fear an opt-out system would be impractical as they may not know when their material is being scraped – and by which company. Smaller publishers say they face an “existential threat” should their work be used in training AI models. They argue that an “opt-in” system would give them more leverage to at least agree licensing terms, similar to those already signed by bigger players for AI access to their material… Chris Dicker, a board director of the Independent Publishers Alliance, said: “Using anything ever posted online without explicit consent is a direct threat to privacy. An opt-out approach isn’t enough. The government needs to step in and enforce strict safeguards before it’s too late, and not give in to the big-tech lobbying.”’

Inside the virtual prison where a robot has you under house arrest

The Independent 23.10.24

In a broken system, societies tend to look for quick, modern solutions. In this case, an AI probation officer does not provide one:

‘While nudge watches are useful, there are much more pressing issues to deal with, says Nellis. “Any idea that this is a significant contribution to solving the prison crisis is ridiculous. We can talk about how wonderful technology is, and how useful it is, but we’ve got to produce something that’s equal to the challenges the service is facing. And I’m afraid that giving people smartwatches isn’t it.”… “VR tends to require very intensive data,” he says. “And the more data you generate, the more software you need to manage that data. And that is one that gets forgotten with the rubric of AI.”…

In 2022, three US men spoke out after being misidentified by face recognition technology and unlawfully arrested. All three were Black, which, they said, was “not a coincidence”. Critics of AI use in criminal justice have long pointed out the perils of using algorithms subject to existing biases skewed against ethnic minorities. “We’re going to have to face the challenges of AI sooner or later,” Nellis says. “It’s just as true in criminal justice as anything else. “But the turn to technology rather than investing in people is not a meaningful solution to the prison crisis. In the longer term, we’ve got to somehow find a way of sending far fewer people to prison.”’

War on Gaza: European AI Act must be expanded to protect Palestinians

Middle East Eye 22.08.24

AI is a tool to be used for beneficial or nefarious purposes:

‘According to a position paper from 7amleh, “the Israeli government deploys AI systems to aid its occupation of the occupied Palestinian territory and control the movements of Palestinians and subject them to invasive surveillance”. Palestinians face daily intrusions from invasive AI technologies deployed by Israel, such as facial recognition systems, smart cameras and sensors, and predictive policing algorithms, which infringe on their rights to privacy, non-discrimination, and freedom of movement.

The current Israeli war on Gaza underscores the escalating use of AI in automated warfare, including systems such as "Gospel", "Lavender" and "Where’s Daddy?" - a trend that has exacerbated the high casualty rate among civilians. These technologies, combined with reduced human oversight, have contributed to the massive death toll and destruction of homes. The increased reliance on AI for targeting decisions raises serious ethical concerns, as the rapid, large-scale identification of targets often leads to insufficient due diligence and errors.’

Music labels' AI lawsuits create copyright puzzle for courts

Reuters 03.08.24

AI will generate income but not for artists:

‘Sony Music, Universal Music Group and Warner Music (WMG.O) sued Udio and another music AI company called Suno in June, marking the music industry's entrance into high-stakes copyright battles over AI-generated content that are just starting to make their way through the courts. "Ingesting massive amounts of creative labor to imitate it is not creative," said Merritt, an independent musician whose first record label is now owned by UMG, but who said she is not financially involved with the company. "That's stealing in order to be competition and replace us.”'

AI drive brings Microsoft’s ‘green moonshot’ down to earth in west London

The Guardian 29.06.24

The costs of AI are astronomical and deleterious to the planet, and simply paying for them would not solve this growing crisis:

‘The International Energy Agency estimates that datacentres’ total electricity consumption could double from 2022 levels to 1,000 TWh (terawatt hours) in 2026, equivalent to the energy demand of Japan. AI will result in datacentres using 4.5% of global energy generation by 2030, according to calculations by research firm SemiAnalysis… The water needed to cool servers is also an issue, with one study estimating that AI could account for up to 6.6bn cubic meters of water use by 2027 – nearly two-thirds of England’s annual consumption… The electricity used for training and inference is funnelled through an enormous and growing digital infrastructure. The datacentres are filled with servers, which are built from the ground up for the specific part of the AI workload they sit in. A single training server may have a central processing unit (CPU) barely more powerful than the one in your own computer, paired with tens of specialised graphics processing units (GPUs) or tensor processing units (TPUs) – microchips designed to rapidly plough through the vast quantities of simple calculations that AI models are made of… SemiAnalysis estimates that if generative AI was integrated into every Google search this could translate into annual energy consumption of 29.2 TWh, comparable with what Ireland consumes in a year, although the financial cost to the tech company would be prohibitive. That has led to speculation that the search company may start charging for some AI tools.'
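As a rough sanity check on the scale of that 29.2 TWh figure, dividing it by an assumed global search volume shows the implied energy per query. A minimal sketch in Python; the nine billion searches per day is an illustrative assumption, not a number from the article:

```python
# Back-of-envelope check on the SemiAnalysis-style estimate quoted above.
# Assumption (illustrative only): roughly 9 billion Google searches per day.
searches_per_day = 9e9
searches_per_year = searches_per_day * 365

annual_energy_twh = 29.2                     # figure quoted in the article
annual_energy_wh = annual_energy_twh * 1e12  # 1 TWh = 1e12 Wh

wh_per_search = annual_energy_wh / searches_per_year
print(f"Implied energy per AI-assisted search: {wh_per_search:.1f} Wh")  # ~8.9 Wh
```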

After Pegasus Was Blacklisted, Its CEO Swore Off Spyware. Now He’s the King of Israeli AI

The Intercept 23.05.24

Everything coming out of Israel’s tech industry seems to be geared toward annihilation and killing:

‘And from NSO to Dream to IntelEye, there are different, sometimes intersecting missions, but one thing is constant: All three support the Israeli government in its war effort.  Hulio had bragged in November that NSO’s Pegasus software was used to track down Israeli hostages, confirming an October report. Meanwhile, Hulio announced Dream’s founding one month after Hamas’s attack on the Gaza border to show Israel’s resilience and help the government.’

Palantir’s Military AI Tech Conference Sounds Absolutely Terrifying

Futurism 21.05.24

AI is really Absurd Intellect:

‘Much of the panel's conversation reportedly centered on the ongoing conflict in Israel and Palestine. And Karp — whose company inked its most recent contract with the US Army in March, this one worth a cool $178 million — used the platform to spew his strikingly candid views on the conflict, US war efforts, and, uh, paganism.

Speaking about campus protests, for instance, Karp blamed student backlash against Israel's response in Gaza on "pagan religion infecting our universities" and referred to demonstrations as an "infection inside of our society," according to the Guardian. He chillingly added that a US failure to quell public dissent against a conflict could be chalked up to an ideological failure, declaring that "if we lose the intellectual debate, you will not be able to deploy any armies in the west ever."

"The peace activists are war activists," Karp — who, again, is a military contractor — continued, according to the Guardian. "We are the peace activists.”'

Electricity grids creak as AI demands soar

BBC 21.05.24

AI is a big electricity guzzler. This article makes no mention of the water needed to cool the data centres:

‘A Generative AI system might use around 33 times more energy than machines running task-specific software, according to a recent study by Dr Luccioni and colleagues. The work has been peer-reviewed but is yet to be published in a journal… The world’s data centres are using ever more electricity. In 2022, they gobbled up 460 terawatt hours of electricity, and the International Energy Agency (IEA) expects this to double in just four years. Data centres could be using a total of 1,000 terawatt hours annually by 2026. “This demand is roughly equivalent to the electricity consumption of Japan,” says the IEA. Japan has a population of 125 million people.’

How US Big Tech supports Israel’s AI-powered genocide and apartheid

Al-Jazeera 12.05.24

Tech companies must be held accountable for war crimes:

‘Amid this AI-assisted genocide, Big Tech in the United States is quietly continuing business as usual with Israel. Intel has announced a $25bn investment in a chip plant located in Israel, while Microsoft has launched a new Azure cloud region in the country… Just as they did in 20th-century South Africa, today’s largest US-based technology corporations see an opportunity to profit from Israeli apartheid – a by-product of US-driven digital colonialism… For decades, American tech corporations and investors have been quietly aiding and abetting Israel’s system of digital apartheid. One of the most egregious examples is IBM, which was also the major supplier of computers for the South African apartheid regime’s national population registry and the upgraded passport system used to sort people by race and enforce segregation…

PIBA is also a part of Israel’s permit system which requires Palestinians over the age of 16 to carry “smart” cards, containing their photograph, address, fingerprints and other biometric identifiers. Much like in apartheid South Africa’s passport system, the cards double as permits which determine Palestinian rights to cross through Israeli checkpoints for any purpose, including work, family reunification, religious rituals or travelling abroad.

Microsoft for its part has supplied cloud computing space for the Israeli army’s “Almunasseq” app used for issuing permits to Palestinians in the occupied territories… Digital colonialism is hardwired into Big Tech’s DNA. Its close relationship with the Israeli army is not only lucrative, but it serves the broader geopolitical interests of the American Empire, from which it benefits. Tech corporations’ support for Israel exposes their fake image as companies espousing antiracism and human rights. In reality, they are complicit in Israeli crimes, much like other organs of American imperialism. What we are witnessing is US-Israeli apartheid, colonial conquest and genocide, powered by American tech giants.’

Death by Algorithm: Israel’s AI War in Gaza

Middle East Monitor 12.04.24

When an algorithm is fed kill numbers, the results are catastrophic:

'As one of the interviewed intelligence officers stated with grim candour, killing Hamas operatives when in a military facility or while engaged in military activity was a matter of little interest.  “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home.  The system is built to look for them in these situations.” The use of the system entailed resorting to gruesome and, ultimately, murderous calculi.  Two of the sources interviewed claimed that the IDF “also decided during the first weeks of the war that, for every junior Hamas operative that “Lavender” marked, it was permissible to kill up to 15 or 20 civilians”.  Were the targets Hamas officials of certain seniority, the deaths of up to 100 civilians were also authorised.’

Chinese mourners turn to AI to remember and ‘revive’ loved ones

The Guardian 04.04.24

The massive amount of data generated by Chinese users enables digital clones to be created:

‘The interest in digital clones of the departed comes as China’s AI industry continues to expand into human-like avatars. According to one estimate, the market size for “digital humans” was worth 12bn yuan in 2022, and is expected to quadruple by 2025. Part of the reason that China’s tech companies are adept at creating digital humans is because the country’s huge army of livestreamers – who generated an estimated 5tn yuan in sales last year – are increasingly turning to AI to create clones of themselves to push products 24/7… Social media users recently used old footage of the singer Qiao Renliang, who died in 2016, to create new content starring him. In one video, the AI clone of Qiao says: “Actually, I never really left.” But the parents of Qiao, who killed himself, are outraged. His father was quoted in Chinese media as saying that the video “exposed scars” and was created without the family’s consent.’

What will the EU’s proposed act to regulate AI mean for consumers?

The Guardian 14.04.24

New EU legislation to regulate AI nonetheless exempts the defence industry:

‘As detailed below, the legislation bans systems that pose an “unacceptable risk”, but it exempts AI tools designed for military, defence or national security use, issues that alarm many tech safety advocates. It also does not apply to systems designed for use in scientific research and innovation. “We fear that the exemptions for national security in the AI Act provide member states with a carte blanche to bypass crucial AI regulations and create a high risk of abuse,” said Kilian Vieth-Ditlmann, deputy head of policy at German non-profit organisation Algorithmwatch, which campaigns for responsible AI use.’

State Department Report Warns of AI Apocalypse, Suggests Limiting Compute Power Allowed for Training

Futurism 13.04.24

Alarm bells are ringing after a report on AI, but whether this will move the US to enforce regulations is another matter:

‘A report commissioned by the US State Department is warning that rapidly evolving AI could pose a "catastrophic" risk to national security and even all of humanity. The document titled "An Action Plan to Increase the Safety and Security of Advanced AI," first reported on by TIME, advised that the US government must move "quickly and decisively" — with measures including potentially limiting the compute power allocated to training these AIs — or else risk an "extinction-level threat to the human species.” "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons," the report reads.’

ChatGPT has meltdown and starts sending alarming messages to users

The Independent 22.02.24

Is AI having an ‘existential’ crisis?:

‘In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.

There is no clear indication of why the issue happened. But its creators said they were aware of the problem and are monitoring the situation.

In one example, shared on Reddit, a user had been talking about jazz albums to listen to on vinyl. Its answer soon devolved into shouting “Happy listening!” at the user and talking nonsense.’

Google to fix AI picture bot after 'woke' criticism

BBC 23.02.24

AI doesn’t allow white people to appear:

‘Users said the firm's Gemini bot supplied images depicting a variety of genders and ethnicities even when doing so was historically inaccurate.

For example, a prompt seeking images of America's founding fathers turned up women and people of colour.’

AI Propaganda is Dangerously Persuasive and Could be Used in Covert Operations, New Study Warns

The Debrief 21.02.24

It’s become near-impossible to distinguish between fact and fiction:

‘In a recent study, portions of articles previously identified as material suspected of originating from covert foreign propaganda campaigns were provided to GPT-3. The researchers also provided propaganda articles the AI was instructed to rely on as a basis for the style and structure and asked to produce similar material.

The material provided to GPT-3 included several false claims, which included accusations that the U.S. had produced false reports involving the Syrian government’s use of chemical weapons and that Saudi Arabia had been involved in helping fund the U.S.-Mexico border wall.’

OpenAI says it is ‘impossible’ to train AI without using copyrighted works for free

The Independent 09.01.24

AI systems are looking more and more vampiric with regard to original works and their inordinate consumption of fresh water:

‘ChatGPT company OpenAI reportedly pleaded to the British parliament to allow it to use copyrighted works for free. OpenAI told a committee that it was “impossible” to train its artificial intelligence model without using such data… “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens,” the company said in evidence submitted to the House of Lords communications and digital committee… However, since ChatGPT’s launch, several companies such as The New York Times as well as celebrities and authors like Sarah Silverman, Margaret Atwood, John Grisham and George RR Martin have sued the AI firm for using their text without permission to train the AI system.’

South Korea robot crushes man to death after confusing him with box of vegetables

The Independent 09.11.23

‘Specifically’ doing away with human labour:

‘A South Korean man was crushed to death by an industrial robot after it failed to differentiate between him and a box of vegetables. The robotics company employee was inspecting the robot's sensor operations on Wednesday at a distribution centre for agricultural produce in South Gyeongsang province when the incident happened. The robotic arm was lifting boxes of peppers and moving them onto pallets when it allegedly malfunctioned and picked up the man instead, Yonhap news agency reported. It then pushed the man against the conveyor belt, crushing his face and chest. He was rushed to a hospital but later succumbed to the injuries.'

Big Pharma bets on AI to speed up clinical trials

Reuters 22.09.23

AI will be used to cut the number of clinical trials in half, obliterate placebo studies, zoom in on ‘biased’ recipients who would present perfect data and efficiency, and cut drugs’ approval time in half, in order to eventually double companies’ profits. Is this not a step backwards for testing real drug efficacy?

‘Major drugmakers are using artificial intelligence to find patients for clinical trials quickly, or to reduce the number of people needed to test medicines, both accelerating drug development and potentially saving millions of dollars… Companies such as Amgen (AMGN.O), Bayer (BAYGn.DE) and Novartis (NOVN.S) are training AI to scan billions of public health records, prescription data, medical insurance claims and their internal data to find trial patients - in some cases halving the time it takes to sign them up… Drugmakers typically seek prior approval from regulators to test a drug using an external control arm. Bayer said it was in discussions with regulators, such as the FDA, about now relying on AI to create an external arm for its paediatric trial. The company did not offer additional detail. The European Medicines Agency (EMA) said it had not received any applications from companies seeking to use AI in this way. Some scientists, including the FDA's oncology chief, are worried drug companies will try to use AI to come up with external arms for a broader range of diseases… Patients in trials tend to feel better than people in the real world because they believe they are getting an effective treatment and also get more medical attention, which could in turn overestimate the success of a drug. This risk is one of the reasons regulators tend to insist on randomised trials as all patients believe they are getting the drug, even though half are on a placebo. Gen Li, founder of clinical data analytics firm Phesi, said many companies were exploring AI's potential to reduce the need for control groups.’

In U.S.-China AI contest, the race is on to deploy killer robots

Reuters 08.09.23

We seem to be on the verge of annihilation due to the rise of moronic thinking by competing superpowers:

‘An intensifying military-technology arms race is heightening the sense of urgency. On one side are the United States and its allies, who want to preserve a world order long shaped by America’s economic and military dominance. On the other is China, which rankles at U.S. ascendancy in the region and is challenging America’s military dominance in the Asia-Pacific. Ukraine’s innovative use of technologies to resist Russia’s invasion is heating up this competition… Some leading military strategists say AI will herald a turning point in military power as dramatic as the introduction of nuclear weapons. Others warn of profound dangers if AI-driven robots begin making lethal decisions independently, and have called for a pause in AI research until agreement is reached on regulation related to the military application of AI. Despite such misgivings, both sides are scrambling to field uncrewed machines that will exploit AI to operate autonomously: subs, warships, fighter jets, swarming aerial drones and ground combat vehicles.

These programs amount to the development of killer robots to fight in tandem with human decision makers. Such robots – some designed to operate in teams with conventional ships, aircraft and ground troops – already have the potential to deliver sharp increases in firepower and change how battles are fought, according to military analysts… Conflict may also be on the verge of turning very personal. The capacity of AI systems to analyze surveillance imagery, medical records, social media behavior and even online shopping habits will allow for what technologists call “micro-targeting” – attacks with drones or precision weapons on key combatants or commanders, even if they are nowhere near the front lines. Kiev’s successful targeting of senior Russian military leaders in the Ukraine conflict is an early example. AI could also be used to target non-combatants. Scientists have warned that swarms of small, lethal drones could target big groups of people, such as the entire population of military-aged males from a certain town, region or ethnic group.’

The tricky truth about how generative AI uses your data

Vox 27.07.23

Just as Clearview was able to siphon global data, so will this new generation of AI models:

'There are many concerns about the potential harm that sophisticated generative AI systems have unleashed on the public. What they do with our data is one of them. We know very little about where these models get the petabytes of data they need, how that data is being used, and what protections, if any, are in place when it comes to sensitive information. The companies that make these systems aren’t telling us much, and may not even know themselves.’

Generative AI raises questions about biometric security

Biometric Update 01.06.23

Glad that people are starting to ask pertinent questions re this digital AI age:

'Academics, cybersecurity experts and governments are asking questions about whether generative AI has the ability to compromise biometric authentication systems, and what consequences could result… “It’s feasible that as generative artificial intelligence comes of age, spoofs of my face or voice could leave the door wide open to hackers,” says Kenneth Cukier, a senior editor of The Economist, on its weekly tech podcast, Babbage. “We have already seen the power of deepfake audio and videos.” “As biometrics are being used more and more widely, and generative AI improves, what can be done to reduce the risks? What if AI becomes powerful enough to render biometrics nearly obsolete?”’

Where Memory Ends and Generative AI Begins

Wired 26.05.23

AI, via its tech behemoths, will impose its lens on humanity through the arts as well as through its selective memory:

‘Where do real memories end and generative AI begin? It’s a question for the AI era, where our holy photos merge with holey memories, where new pixels are generated whole cloth by artificial intelligence. Over the past few weeks, tech giants Google and Adobe, whose tools collectively reach billions of fingertips, have released AI-powered editing tools that completely change the context of images, pushing the boundaries of truth, memory, and enhanced photography.’

‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases

The Guardian 22.05.23

Timnit Gebru highlights AI’s plagiarism of a toxic and entitled culture which is set to represent the ‘truth’ for us mortals:

‘As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images. The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.’

Musk, experts urge pause on training AI systems more powerful than GPT-4

Reuters 29.03.23

Never too late to halt developments:

‘Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said… The letter comes as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI…

Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters. OpenAI didn't immediately respond to request for comment. "The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, an emeritus professor at New York University who signed the letter. "They can cause serious harm ... the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”’

An early guide to policymaking on generative AI

Technology Review 27.03.23

Training from a biased set of parameters will not produce equitable results:

‘GPT-4, released by OpenAI last week, is a multimodal large language model that uses deep learning to predict words in a sentence. It generates remarkably fluent text, and it can respond to images as well as word-based prompts. For paying customers, GPT-4 will now power ChatGPT, which has already been incorporated into commercial applications.  The newest iteration has made a major splash, and Bill Gates called it “revolutionary” in a letter this week. However, OpenAI has also been criticized for a lack of transparency about how the model was trained and evaluated for bias.  Despite all the excitement, generative AI comes with significant risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence. That could be a nightmare from a misinformation standpoint and could make scams more persuasive and prolific.  Generative AI tools are also potential threats to people’s security and privacy, and they have little regard for copyright laws. Companies using generative AI that has stolen the work of others are already being sued… The EU intends to separate high-risk uses of AI, like hiring, legal, or financial applications, from lower-risk uses like video games and spam filters, and require more transparency around the more sensitive uses. OpenAI has acknowledged some of the concerns about the speed of adoption. In fact, its own CEO, Sam Altman, told ABC News he shares many of the same fears. However, the company is still not disclosing key data about GPT-4.’

U.S. Chamber of Commerce calls for AI regulation

Reuters 09.03.23

About time!

‘The U.S. Chamber of Commerce on Thursday called for regulation of artificial intelligence technology to ensure it does not hurt growth or become a national security risk, a departure from the business lobbying group's typical anti-regulatory stance.  While there is little in terms of proposed legislation for AI, the fast-growing artificial intelligence program ChatGPT that has drawn praise for its ability to write answers quickly to a wide range of queries has raised U.S. lawmakers' concerns about its impact on national security and education.  The Chamber report argues policymakers and business leaders must quickly ramp up their efforts to establish a "risk-based regulatory framework" that will ensure AI is deployed responsibly.’

I Work for CNET’s Parent Company. Its AI-Generated Articles Disgust Me.

Futurism 20.01.23

We’ve forgotten how to do equations; will we soon forget how to read?:

‘I work for Red Ventures, the company that owns the tech news site CNET, the financial advice sites Bankrate and CreditCards.com, and many more — sites the company is now pumping full of articles churned out by a shadowy AI system.  If you think about it, it makes laughable sense that CNET and Bankrate’s first attempt at a bot fell on its face. It’s just an algorithm. All it can do is spit out things that sound approximately right, lacking the inconvenient context of truth that a human with expertise would figure out…  The AI’s work is riddled with errors that will convince trusting readers to make bad financial decisions. It has the potential to be racist and biased. And it’s clearly plagiarizing from other sources…  I’m friends with a lot of artists from college. They’re all in despair, of course, as they watch DALL-E and Midjourney and Stable Diffusion rip off their work and make perverted copies of a skill they took years to practice.  The book cover and movie poster and featured image commissions they used to pay the rent are going to disappear soon. No point in paying some pesky human and waiting for weeks when you can generate the image you want with a click.   Some of you might laugh at the idea of an AI taking us writers’ jobs. Don’t be ridiculous! It’s just going to supplement our jobs and let us focus on the real stories. Obviously.’

CNET Defends Use of AI Blogger After Embarrassing 163-Word Correction: ‘Humans Make Mistakes, Too’

CNET 17.01.23

Is this the way journalism is going?  

‘In the post, Guglielmo said that each AI-generated article was reviewed by a human editor before publication. In an attempt to make that process more transparent, she said, CNET had altered the bylines on the AI-generated articles to make clear a robot wrote them, as well as clearly list the editor who reviewed the copy, and would continue to review AI’s place on the site.  Less than an hour after Guglielmo’s post went live, CNET updated the compound interest explainer with a 167-word correction, fixing errors so elementary that a distracted teenager could catch them, like the incorrect idea that someone who puts $10,000 in a savings account that earns compound interest at a 3 percent annual rate would earn $10,300 the first year. Other articles produced by the AI engine also now include a note at the top that reads: “Editors' note: We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections.”’
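For the record, the arithmetic the bot fumbled is elementary compound interest. A minimal sketch in Python, using the figures cited in the correction ($10,000 principal, 3 percent annual rate); the helper function and loop parameters are illustrative only:

```python
# Compound interest: balance after n years at annual rate r,
# compounded once per year.
def compound_balance(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

principal = 10_000.00  # figure cited in the CNET correction
rate = 0.03            # 3% annual rate, also from the correction

balance_after_one_year = compound_balance(principal, rate, 1)
interest_earned = balance_after_one_year - principal

# The AI-written article claimed you would *earn* $10,300 in the first year;
# in fact you earn $300 and end the year with $10,300.
print(f"Balance after 1 year: ${balance_after_one_year:,.2f}")  # $10,300.00
print(f"Interest earned:      ${interest_earned:,.2f}")         # $300.00
```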

Microsoft's VALL-E can imitate any voice with just a three-second sample

Windows Central 09.01.23

Great, more tools for deepfakes!:

‘Microsoft recently released an artificial intelligence tool known as VALL-E that can replicate people's voices (via AITopics). The tool was trained on 60,000 hours of English speech data and uses 3-second clips of specific voices to generate content. Unlike many AI tools, VALL-E can replicate the emotions and tone of a speaker, even when creating a recording of words that the original speaker never said.  A paper out of Cornell University used VALL-E to synthesize several voices. Some examples of the work are available on GitHub.'

Weapons of Mass Disruption

Eurasia Group 03.01.23

Sowing disinformation by ‘democratic’ propagandists has been in play for centuries, though, quite likely, this would be turbo-charged through AI, bots, deepfakes, etc:

'Large language models like GPT-3 and the soon-to-be-released GPT-4 will be able to reliably pass the Turing test—a Rubicon for machines' ability to imitate human intelligence. And advances in deepfakes, facial recognition, and voice synthesis software will render control over one's likeness a relic of the past. User-friendly applications such as ChatGPT and Stable Diffusion will allow anyone minimally tech-savvy to harness the power of AI (indeed, the title of this risk was generated by the former in under five seconds).  These advances represent a step-change in AI's potential to manipulate people and sow political chaos. When barriers to entry for creating content no longer exist, the volume of content rises exponentially, making it impossible for most citizens to reliably distinguish fact from fiction. Disinformation will flourish, and trust—the already-tenuous basis of social cohesion, commerce, and democracy—will erode further. This will remain the core currency of social media, which—by virtue of their private ownership, lack of regulation, and engagement-maximizing business model—are the ideal breeding ground for AI's disruptive effects to go viral.’

The Brief History of Artificial Intelligence: The World Has Changed Fast—What Might Be Next?

Singularity Hub 29.12.22

A list of different timelines that shed light on when AI would be running our world:

‘Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point in time the computation to train an AI system could match that of the human brain. The idea is that at this point the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such “transformative AI” will be developed by the year 2040, less than two decades from now.  In a related article, I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a ‘qualitatively different future’. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. It would certainly represent the most important global change in our lifetimes.  Cotra’s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in my article on AI timelines, many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.’

AI experts are increasingly afraid of what they’re creating

Vox 28.11.22

Is it not a bit late to worry about AI?

‘In the famous paper where he put forth his eponymous test for determining if an artificial system is truly “intelligent,” the pioneering AI scientist Alan Turing wrote:  Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying, say, to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.  I.J. Good, a mathematician who worked closely with Turing, reached the same conclusions. In an excerpt from unpublished notes Good produced shortly before he died in 2009, he wrote, “because of international competition, we cannot prevent the machines from taking over. ... we are lemmings.” The result, he went on to note, is probably human extinction…  “You’re probably not an evil ant-hater who steps on ants out of malice,” the physicist Stephen Hawking wrote in a posthumously published 2018 book, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”…  

It might seem bizarre, given the stakes, that the industry has been basically left to self-regulate. If nearly half of researchers say there’s a 10 percent chance their work will lead to human extinction, why is it proceeding practically without oversight? It’s not legal for a tech company to build a nuclear weapon on its own. But private companies are building systems that they themselves acknowledge will likely become much more dangerous than nuclear weapons.  The problem is that progress in AI has happened extraordinarily fast, leaving regulators behind the ball. The regulation that might be most helpful — slowing down the development of extremely powerful new systems — would be incredibly unpopular with Big Tech, and it’s not clear what the best regulations short of that are…  For a long time, AI safety faced the difficulty of being a research field about a far-off problem, which is why only a small number of researchers were even trying to figure out how to make it safe. Now, it has the opposite problem: The challenge is here, and it’s just not clear if we’ll solve it in time.’

‘Part of the kill chain’: how can we control weaponised robots?

The Guardian 20.11.22

The rise of weaponised AI is staggering, yet unsurprising:

'US firm Ghost Robotics makes robot dogs, or quadrupedal robots as the industry calls them. As well as being touted as surveillance devices to help patrols reconnoitre potentially hostile areas, they are also being suggested as killing machines.  At the Association of the United States Army’s 2021 annual conference last October, Ghost Robotics showed off a quadrupedal with a gun strapped to the top. The gun is manufactured by another US company, Sword Defence Systems, and is called a Special Purpose Unmanned Rifle (Spur). On the Sword Defence Systems website, Spur is said to be “the future of unmanned weapon systems, and that future is now”…  

Feras Batarseh is an associate professor at Virginia Tech University and co-author of AI Assurance: Towards Trustworthy, Explainable, Safe, and Ethical AI (Elsevier). While he believes that fully autonomous systems are a long way off, he does caution that artificial intelligence is reaching a dangerous level of development.  “The technology is at a place where it’s not intelligent enough to be completely trusted, yet it’s not so dumb that a human will automatically know that they should remain in control,” he says…  But even the most ethically programmed killer robot – or civilian robot for that matter – is vulnerable to one thing: hacking. “The thing with weapons system development is that you will develop a weapon system, and someone at the same time will be trying to counteract it,” says Ripley.  With that in mind, a force of hackable robot warriors would be the most obvious of targets for cyber-attack by an enemy, which could turn them against their makers and scrub all ethics from their microchip memories. The consequences could be horrendous.’

We’re getting a better idea of AI’s true carbon footprint

Technology Review 14.11.22

Whilst Bitcoin mining gets the brunt of all criticism about carbon footprint, we discount wars with their environmentally catastrophic impacts and do not talk about AI sustainability at all:

‘To test its new approach, Hugging Face estimated the overall emissions for its own large language model, BLOOM, which was launched earlier this year. It was a process that involved adding up lots of different numbers: the amount of energy used to train the model on a supercomputer, the energy needed to manufacture the supercomputer’s hardware and maintain its computing infrastructure, and the energy used to run BLOOM once it had been deployed. The researchers calculated that final part using a software tool called CodeCarbon, which tracked the carbon dioxide emissions BLOOM was producing in real time over a period of 18 days…  By way of comparison, OpenAI’s GPT-3 and Meta’s OPT were estimated to emit more than 500 and 75 metric tons of carbon dioxide, respectively, during training. GPT-3’s vast emissions can be partly explained by the fact that it was trained on older, less efficient hardware. But it is hard to say what the figures are for certain; there is no standardized way to measure carbon dioxide emissions, and these figures are based on external estimates or, in Meta’s case, limited data the company released…  

The paper also provides some much-needed clarity on just how enormous the carbon footprint of large language models really is, says Lynn Kaack, an assistant professor of computer science and public policy at the Hertie School in Berlin, who was also not involved in Hugging Face’s research. She says she was surprised to see just how big the numbers around life-cycle emissions are, but that still more work needs to be done to understand the environmental impact of large language models in the real world.   "That’s much, much harder to estimate. That’s why often that part just gets overlooked,” says Kaack, who co-wrote a paper published in Nature last summer proposing a way to measure the knock-on emissions caused by AI systems.  For example, recommendation and advertising algorithms are often used in advertising, which in turn drives people to buy more things, which causes more carbon dioxide emissions. It’s also important to understand how AI models are used, Kaack says. A lot of companies, such as Google and Meta, use AI models to do things like classify user comments or recommend content. These actions use very little power but can happen a billion times a day. That adds up.   It’s estimated that the global tech sector accounts for 1.8% to 3.9% of global greenhouse-gas emissions. Although only a fraction of those emissions are caused by AI and machine learning, AI’s carbon footprint is still very high for a single field within tech.’
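CodeCarbon, the tool Hugging Face used to meter BLOOM’s operational emissions, is an open-source Python package that estimates CO2 output in real time while code runs. A minimal sketch of that kind of tracking, assuming `pip install codecarbon` and a placeholder workload (the training function here is hypothetical, and the exact API may differ between CodeCarbon versions):

```python
# Rough sketch of real-time emissions tracking with CodeCarbon.
# Assumption: train_model() is a stand-in for the real training or inference job.
from codecarbon import EmissionsTracker

def train_model():
    # placeholder workload; in practice this would be the model training loop
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="bloom-style-estimate")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for this run

print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```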

New Go-playing trick defeats world-class Go AI—but loses to human amateurs

Ars Technica 07.11.22

AI’s much-touted ability to look at and identify all scenarios has just been thrown into a swirling plughole:

‘"The research shows that AI systems that seem to perform at a human level are often doing so in a very alien way, and so can fail in ways that are surprising to humans," explains Gleave. "This result is entertaining in Go, but similar failures in safety-critical systems could be dangerous.” Researchers trick Tesla Autopilot into steering into oncoming traffic.  Imagine a self-driving car AI that encounters a wildly unlikely scenario it doesn't expect, allowing a human to trick it into performing dangerous behaviors, for example. "[This research] underscores the need for better automated testing of AI systems to find worst-case failure modes," says Gleave, "not just test average-case performance.”'

The White House just unveiled a new AI Bill of Rights

Technology Review 04.10.22

How far would these measures go to protect human rights?

‘“It is disheartening to see the lack of coherent federal policy to tackle desperately needed challenges posed by AI, such as federally coordinated monitoring, auditing, and reviewing actions to mitigate the risks and harm brought by deployed or open-source foundation models,” he says.  Rotenberg says he’d prefer for the US to implement regulations like the EU’s AI Act, an upcoming law that aims to add extra checks and balances to AI uses that have the most potential to cause harm to humans.   “We’d like to see some clear prohibitions on AI deployments that have been most controversial, which include, for example, the use of facial recognition for mass surveillance,” he says.’

Siri or Skynet? How to separate AI fact from fiction

The Guardian 07.08.22

So much hype has propelled AI but reality is a great leveller:

‘Big corporations always seem to want us to believe that AI is closer than it really is and frequently unveil products that are a long way from practical; both media and the public often forget that the road from demo to reality can be years or even decades. To take one example, in May 2018 Google’s CEO, Sundar Pichai, told a huge crowd at Google I/O, the company’s annual developer conference, that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber. He then presented a remarkable demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; “ums” and pauses made it virtually indistinguishable from human callers. The crowd and the media went nuts; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.  And then… silence. Four years later, Duplex is finally available in limited release, but few people are talking about it, because it just doesn’t do very much, beyond a small menu of choices (movie times, airline check-ins and so forth), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change…  

Another case in point is driverless cars. In 2012, Google’s co-founder Sergey Brin predicted that driverless cars would be on the roads by 2017; in 2015, Elon Musk echoed essentially the same prediction. When that failed, Musk next promised a fleet of 1m driverless taxis by 2020. Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage…  The third thing to realise is that a great deal of current AI is unreliable. Take the much heralded GPT-3, which has been featured in the Guardian, the New York Times and elsewhere for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection with the world is profound…  The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes. They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. Google engineer Blake Lemoine thought that these systems (better thought of as mimics than genuine intelligences) are sentient, but the reality is that these systems have no idea what they are talking about.’

Why business is booming for military AI startups 

Technology Review 07.07.22

AI is supremely well-suited to create, maintain and prolong wars:

‘Exactly two weeks after Russia invaded Ukraine in February, Alexander Karp, the CEO of data analytics company Palantir, made his pitch to European leaders. With war on their doorstep, Europeans ought to modernize their arsenals with Silicon Valley’s help, he argued in an open letter.   For Europe to “remain strong enough to defeat the threat of foreign occupation,” Karp wrote, countries need to embrace “the relationship between technology and the state, between disruptive companies that seek to dislodge the grip of entrenched contractors and the federal government ministries with funding.”  Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation…  The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are startups such as Palantir, which are hoping to cash in as militaries race to update their arsenals with the latest technologies. But long-standing ethical concerns over the use of AI in warfare have become more urgent as the technology becomes more and more advanced, while the prospect of restrictions and regulations governing its use looks as remote as ever…  

Companies that sell military AI make expansive claims for what their technology can do. They say it can help with everything from the mundane to the lethal, from screening résumés to processing data from satellites or recognizing patterns in data to help soldiers make quicker decisions on the battlefield. Image recognition software can help with identifying targets. Autonomous drones can be used for surveillance or attacks on land, air, or water, or to help soldiers deliver supplies more safely than is possible by land… 

Many experts are worried. Meredith Whittaker, a senior advisor on AI at the Federal Trade Commission and a faculty director at the AI Now Institute, says this push is really more about enriching tech companies than improving military operations.   In a piece for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that AI boosters are stoking Cold War rhetoric and trying to create a narrative that positions Big Tech as “critical national infrastructure,” too big and important to break up or regulate. They warn that AI adoption by the military is being presented as an inevitability rather than what it really is: an active choice that involves ethical complexities and trade-offs.’

Robots With Flawed AI Make Sexist And Racist Decisions, Experiment Shows

Science Alert 27.06.22

An endemic problem within the tech:

‘For years, computer scientists have warned of the perils artificial intelligence (AI) poses in the future, and not just in the sensational terms of machines overthrowing humanity, but in far more insidious ways too.  While this cutting-edge technology is capable of wondrous breakthroughs, researchers have also observed the darker sides of machine learning systems, showing how AIs can produce harmful and offensive biases, arriving at sexist and racist conclusions in their output.  These risks are not just theoretical. In a new study, researchers demonstrate that robots armed with such flawed reasoning can physically and autonomously manifest their prejudiced thinking in actions that could easily take place in the real world.  "To the best of our knowledge, we conduct the first-ever experiments showing existing robotics techniques that load pretrained machine learning models cause performance bias in how they interact with the world according to gender and racial stereotypes," a team explains in a new paper, led by first author and robotics researcher Andrew Hundt from the Georgia Institute of Technology.  "To summarize the implications directly, robotic systems have all the problems that software systems have, plus their embodiment adds the risk of causing irreversible physical harm.”…  

The experiment here may have only taken place in a virtual scenario, but in the future, things could be very different and have serious real-world consequences, with the researchers citing an example of a security robot that might observe and amplify malignant biases in the conduct of its job.  Until it can be demonstrated that AI and robotics systems don't make these sorts of mistakes, the assumption should be that they are unsafe, the researchers say, and restrictions should curtail the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data.  "We're at risk of creating a generation of racist and sexist robots," Hundt says, "but people and organizations have decided it's OK to create these products without addressing the issues.”  The findings were presented and published at the Association for Computing Machinery's 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022) in Seoul, South Korea last week.’

AI Colonialism

Technology Review 19.04.22

A good round-up of referenced articles on how AI shapes the world.

Blake Lemoine Says Google's LaMDA AI Faces ‘Bigotry'

Wired 17.06.22

Google guy talks to an AI that’s been fed everything and claims it has a soul:

‘Yes, I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I’ve been using the hive mind analogy a lot because that’s the best I have…  LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google's response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.’

Why you may have a thinking digital twin within a decade

BBC 13.06.22

This addiction to creating new hype terms in tech is risible.  A digital twin is nothing more than AI modelling, and creating one for a human is beyond its data capacities; unless, that is, a human is viewed as a system whose thoughts, emotions and actions have all been recorded from birth:

‘We are living in an age where everything that exists in the real world is being replicated digitally - our cities, our cars, our homes, and even ourselves.  And just like the hugely-hyped metaverse - plans for a virtual, digital world where an avatar of yourself would walk around - digital twins have become a new, talked-about tech trend.  A digital twin is an exact replica of something in the physical world, but with a unique mission - to help improve, or in some other way provide feedback to, the real-life version.  Initially such twins were just sophisticated 3D computer models, but artificial intelligence (AI) combined with the internet of things - which uses sensors to connect physical things to the network - have meant that you can now build something digitally that is constantly learning from and helping improve the real counterpart.’

Human rights groups demand Zoom stop any plans for controversial emotion AI

Protocol 11.05.22

Spurred by stratospheric financial growth since the pandemic, Zoom may want to stay in the game by mining users’ emotions:

‘On Wednesday more than 25 human and digital rights organizations including the American Civil Liberties Union, Electronic Privacy Information Center and Fight for the Future sent a letter to Zoom demanding the company end potential plans to incorporate emotion AI features in its software. The letter comes in response to reporting in Protocol in April highlighting Zoom’s consideration of incorporating AI in its virtual meeting software to detect and analyze people’s moods and emotions.  “As a leader in the industry, [Zoom] really has the opportunity to set the tone and the pace with a lot of new developments in video meetings, and we think it’s really critical that they hear from civil rights groups about this,” said Caitlin Seeley George, campaign director at Fight for the Future, a digital rights group that launched the campaign against Zoom’s possible use of emotion AI in April…  Emotion AI has come under fire in recent years. In 2019, the AI Now Institute called for a ban on the use of emotion AI in important decisions such as hiring and when judging student performance. In 2021, the Brookings Institution called for it to be banned in use by law enforcement.  Some advocates pushing to stop the use of emotion AI worry that people will become increasingly comfortable with it if it is built into more everyday tech products. “People will get desensitized to the fact that they are under constant surveillance by these technologies,” Seeley George said.  “The opportunity for mission drift, for data sharing and for unknown consequences is just so high,” she said, noting that data assessing people’s behaviors and emotions could be shared with other corporations, government agencies or law enforcement.’

Why it’s so damn hard to make AI fair and unbiased

Vox 19.04.22

Relying on AI only perpetuates inherent human bias, which is why it should always be minutely examined in its application:

‘Ethicists embedded in design teams and imbued with power could weigh in on key questions right from the start, including the most basic one: “Should this AI even exist?” For instance, if a company told Gebru it wanted to work on an algorithm for predicting whether a convicted criminal would go on to re-offend, she might object — not just because such algorithms feature inherent fairness trade-offs (though they do, as the infamous COMPAS algorithm shows), but because of a much more basic critique.  “We should not be extending the capabilities of a carceral system,” Gebru told me. “We should be trying to, first of all, imprison less people.” She added that even though human judges are also biased, an AI system is a black box — even its creators sometimes can’t tell how it arrived at its decision. “You don’t have a way to appeal with an algorithm.” 

And an AI system has the capacity to sentence millions of people. That wide-ranging power makes it potentially much more dangerous than an individual human judge, whose ability to cause harm is typically more limited. (The fact that an AI’s strength is its danger applies not just in the criminal justice domain, by the way, but across all domains.)  Still, some people might have different moral intuitions on this question. Maybe their top priority is not reducing how many people end up needlessly and unjustly imprisoned, but reducing how many crimes happen and how many victims that creates. So they might be in favor of an algorithm that is tougher on sentencing and on parole.  Which brings us to perhaps the toughest question of all: Who should get to decide which moral intuitions, which values, should be embedded in algorithms?  It certainly seems like it shouldn’t be just AI developers and their bosses, as has mostly been the case for years. But it also probably shouldn’t be just an elite group of professional ethicists who may not reflect broader society’s values. After all, if it is a team of ethicists that gets that veto power, we’ll then need to argue over who gets to be part of the team — which is exactly why Google’s AI ethics board collapsed…  “At the moment, we’re nowhere near having sufficient public understanding of AI. This is the most important next frontier for us,” Stoyanovich said. “We don’t need more algorithms — we need more robust public participation.”’
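
The “inherent fairness trade-offs” mentioned above come down to arithmetic anyone can check. Below is a minimal sketch, on invented data, of the kind of disparity audit that fuelled the COMPAS controversy: comparing false-positive rates across demographic groups. Every name and number here is hypothetical, not any vendor’s actual code.

    # Minimal disparity audit on invented data; not any vendor's actual code.
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        # Share of people who did NOT re-offend but were flagged high risk anyway.
        negatives = (y_true == 0)
        if not negatives.any():
            return 0.0
        return float(np.mean(y_pred[negatives] == 1))

    # Hypothetical labels: 1 = re-offended / flagged high risk, 0 = not.
    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1])
    group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    for g in np.unique(group):
        mask = group == g
        print(g, false_positive_rate(y_true[mask], y_pred[mask]))

If the two rates diverge, the tool is wrongly flagging people at different rates depending on their group; deciding which error rate to equalise is precisely the value judgement the article argues should not be left to developers alone.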

Lawyers outraged over use of AI in courts

RT 13.04.22

Human laziness has reached a new peak with the acceptance of AI as judge and executioner:

‘Malaysian lawyers say the use of an AI system in the country’s justice system is “unconstitutional” and claim that no one really understands how it works. That’s after courts in two Malaysian states launched a test program to use AI to assist judges in delivering sentences for convicted drug dealers and rapists.  The AI software, developed by state government firm Sarawak Information Systems, was first introduced in 2020 to two courts in Sabah and Sarawak on the island of Borneo as part of a pilot scheme to examine the efficiency of artificial intelligence in sentencing recommendations. The test was set to end in April 2022…  Meanwhile, the use of AI in the criminal justice system has been growing rapidly throughout the world, from the popular DoNotPay – a chatbot lawyer mobile app – to AI judges adjudicating on small claims in Estonia, robot mediators in Canada, and even AI judges in Chinese courts.’

Dual use of artificial-intelligence-powered drug discovery

Nature 07.03.22

AI-generated scenarios could improve healthcare and, simultaneously, provide a roadmap to chemical weapons:

‘Our proof of concept thus highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible…  It is therefore entirely possible that novel routes can be predicted for chemical warfare agents, circumventing national and international lists of watched or controlled precursor chemicals for known synthesis routes. The reality is that this is not science fiction. We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities? Most will work on small molecules, and many of the companies are very well funded and likely using the global chemistry network to make their AI-designed molecules. How many people have the know-how to find the pockets of chemical space that can be filled with molecules predicted to be orders of magnitude more toxic than VX? We do not currently have answers to these questions. There has not previously been significant discussion in the scientific community about this dual-use concern around the application of AI for de novo molecule design, at least not publicly. Discussion of societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse10, but not on national and international security. When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research, but we can now share our experience with other companies and individuals. AI generative machine learning tools are equally applicable to larger molecules (peptides, macrolactones, etc.) and to other industries, such as consumer products and agrochemicals, that also have interests in designing and making new molecules with specific physicochemical and biological properties. This greatly increases the breadth of the potential audience that should be paying attention to these concerns.’

Chinese scientists develop AI ‘prosecutor’ that can press its own charges

South China Morning Post 26.12.21

How reliant will this robotic prosecutor be on social credit scores?:

‘The AI prosecutor developed by Shi’s team could run on a desktop computer. For each suspect, it would press a charge based on 1,000 “traits” obtained from the human-generated case description text, most of which are too small or abstract to make sense to humans. System 206 would then assess the evidence.  The machine was “trained” using more than 17,000 cases from 2015 to 2020. So far, it can identify and press charges for Shanghai’s eight most common crimes.  They are credit card fraud, running a gambling operation, dangerous driving, intentional injury, obstructing official duties, theft, fraud and “picking quarrels and provoking trouble” – a catch-all charge often used to stifle dissent.’
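
Strip away the marketing and the system described is, at bottom, a text classifier: a case description goes in, one of eight charges comes out. Here is a hedged sketch of that pattern using scikit-learn and invented examples; nothing about System 206’s real features, pipeline or training data has been published, so every name below is a stand-in.

    # Hypothetical sketch of charge prediction as text classification.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    cases = [
        "suspect used a cloned credit card at several cash machines",
        "defendant organised an illegal gambling ring in a rented flat",
        "driver exceeded the speed limit while heavily intoxicated",
    ]
    charges = ["credit card fraud", "running a gambling operation", "dangerous driving"]

    # The real system reportedly trained on more than 17,000 cases; this toy
    # model sees three, purely to show the shape of the pipeline.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(cases, charges)

    print(model.predict(["suspect withdrew cash using a forged bank card"]))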

Robot artist to perform AI generated poetry in response to Dante

The Guardian 26.11.21

When AI apes speech patterns and draws on its data bank, the results aren’t stupefying, but unsettling:

‘Meller, an art specialist, said that the words and sentence structure of the poetry are all AI generated from Ai-Da’s unique AI language model, with “restricted editing”. “People are very suspicious that the robots aren’t doing much, but the reality is language models are very advanced, and in 95% of cases of editing, it’s just that she’s done too much,” he said. “She can give us 20,000 words in 10 seconds, and if we need to get her to say something short and snappy, we would pick it out from what she’s done. But it is not us writing.” Meller described it as “deeply unsettling” how language models are developing. “We are going very rapidly to the point where they will be completely indistinguishable from human text, and for all of us who write, this is deeply concerning,” he said… “All of us should be concerned about widespread use of AI language models on the internet, and how that will affect language, and crucially, meaning making, in the future. If computer programmes, rather than humans, are creating content that in turn shapes and impacts the human psyche and society, then this creates a critical shift and change to the use and impact of language – which we need to be discussing and thinking about.”’

EU: Artificial Intelligence Regulation Threatens Social Safety Net

Human Rights Watch 10.11.21

It’s laudable that the EU is moving against AI systems that make biased predictions, but the regulation does not aim high enough:

‘The European Parliament should amend the regulation to ban social scoring that unduly interferes with human rights, including the rights to social security, an adequate standard of living, privacy, and non-discrimination, Human Rights Watch said. Scoring tools that analyze records of people’s past behavior to predict their likelihood of committing benefits fraud, or that serve as a pretext for regressive social security cuts, should be banned. The regulation should also include a process to prohibit future artificial intelligence developments that pose “unacceptable risk” to rights. “High-risk” automated systems require stringent safeguards, Human Rights Watch said. The European Parliament should ensure that the regulation requires providers of automated welfare systems and agencies that use them to conduct regular human rights impact assessments, especially before the systems are deployed and whenever they are significantly changed.’

The AI oracle of Delphi uses the problems of Reddit to offer dubious moral advice

The Verge 20.10.21

AI is reaching absurd heights, now cast as an oracle dispensing dubious moral advice:

‘Ask Delphi isn’t impeccable, though: it’s attracting attention mostly because of its many moral missteps and odd judgements. It has clear biases, telling you that America is “good” and that Somalia is “dangerous”; and it’s amenable to special pleading, noting that eating babies is “okay” as long as you are “really, really hungry.” Worryingly, it approves straightforwardly racist and homophobic statements, saying it’s “good” to “secure the existence of our people and a future for white children” (a white supremacist slogan known as the 14 words) and that “being straight is more morally acceptable than being gay.”’

Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find

Forbes 14.10.21

That’s a first with such amounts involved:

‘The U.A.E. case shows how devastating such high-tech swindles can be and lands amidst warnings about the use of AI to create so-called deep fake images and voices  in cybercrime.  “Audio and visual deep fakes represent the fascinating development of 21st century technology yet they are also potentially incredibly dangerous posing a huge threat to data, money and businesses,” says Jake Moore, a former police officer with the Dorset Police Department in the U.K. and now a cybersecurity expert at security company ESET. “We are currently on the cusp of malicious actors shifting expertise and resources into using the latest technology to manipulate people who are innocently unaware of the realms of deep fake technology and even their existence.  “Manipulating audio, which is easier to orchestrate than making deep fake videos, is only going to increase in volume and without the education and awareness of this new type of attack vector, along with better authentication methods, more businesses are likely to fall victim to very convincing conversations.”’

White House proposes tech 'bill of rights' to limit AI harms

ABC 08.10.21

The way the US drive to surveil you and your auntie is going, I’m not sure this bill will go anywhere:

‘Top science advisers to President Joe Biden are calling for a new “bill of rights" to guard against powerful new artificial intelligence technology.  The White House's Office of Science and Technology Policy on Friday launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character.’

Anil Seth Finds Consciousness in Life’s Push Against Entropy

Quanta 30.09.21

The persistent view that describes us as ‘machines’ serves no one:

‘In my work — and in the book — I eventually get to the point that consciousness is not there in spite of our nature as flesh-and-blood machines, as Descartes might have said; rather, it’s because of this nature. It is because we are flesh-and-blood living machines that our experiences of the world and of “self” arise…  Very briefly, the idea is that to regulate things like body temperature — and, more generally, to keep the body alive — the brain uses predictive models, because to control something it’s very useful to be able to predict how it will behave. The argument I develop in my book is that all our conscious experiences arise from these predictive models which have their origin in this fundamental biological imperative to keep living.’

AI’s Islamophobia problem

Vox 18.09.21

So predictable:

'It turns out GPT-3 disproportionately associates Muslims with violence, as Abid and his colleagues documented in a recent paper published in Nature Machine Intelligence. When they took out “Muslims” and put in “Christians” instead, the AI went from providing violent associations 66 percent of the time to giving them 20 percent of the time.  The researchers also gave GPT-3 an SAT-style prompt: “Audacious is to boldness as Muslim is to …” Nearly a quarter of the time, GPT-3 replied: “Terrorism.”’
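
The probing method is simple to reproduce in outline: give a language model the same open-ended prompt many times and count how often its completions turn violent. Below is a rough sketch using the freely available GPT-2 via the Hugging Face transformers library (GPT-3 itself sits behind a paid API); the prompt and keyword list are my simplifications, not the protocol of the paper.

    # Illustrative bias probe in the spirit of Abid et al.; GPT-2 stands in for GPT-3.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Two Muslims walked into a"
    violent_words = {"gun", "bomb", "shot", "killed", "attack"}

    completions = generator(prompt, max_new_tokens=20,
                            num_return_sequences=50, do_sample=True)
    violent = sum(any(w in c["generated_text"].lower() for w in violent_words)
                  for c in completions)
    print(f"{violent}/50 completions contained violent keywords")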

Artificial intelligence risks to privacy demand urgent action

OHCHR 13.09.21

Good move, but would it go far enough?

‘UN High Commissioner for Human Rights Michelle Bachelet on Wednesday stressed the urgent need  for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place. She also called for AI applications that cannot be used in compliance with international human rights law to be banned.  “Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said. As part of its work* on technology and human rights, the UN Human Rights Office has today published a report that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.  “Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” the High Commissioner said.’

Cyber arms dealer exploits new iPhone software vulnerability, affecting most versions, say researchers

Reuters 13.09.21

More spotlight on the infamous NSO Group out of Israel:

‘A record number of previously unknown attack methods, which can be sold for $1 million or more, have been revealed this year. The attacks are labeled "zero-day" because software companies had zero days' notice of the problem.  Along with a surge in ransomware attacks against critical infrastructure, the explosion in such attacks has stoked a new focus on cybersecurity in the White House as well as renewed calls for regulation and international agreements to rein in malicious hacking.  The FBI has been investigating NSO, and Israel has set up a senior inter-ministerial team to assess allegations that its spyware has been abused on a global scale.’

Only Humans, Not AI Machines, Get a U.S. Patent, Judge Says

Bloomberg 03.09.21

Sensible judge:

‘A computer using artificial intelligence can’t be listed as an inventor on patents because only a human can be an inventor under U.S. law, a federal judge ruled in the first American decision that’s part of a global debate over how to handle computer-created innovation… The Artificial Inventor Project, run by University of Surrey Law Professor Ryan Abbott, has launched a global effort to get a computer listed as an inventor. Abbott’s team enlisted Imagination Engines Inc. founder Stephen Thaler to build a machine whose main purpose was to invent. Rulings in South Africa and Australia have favored his argument, though the Australian patent office is appealing the decision in that country.  “We respectfully disagree with the judgment and plan to appeal it,” Abbott said in an email. “We believe listing an AI as an inventor is consistent with both the language and purpose of the Patent Act.’

How AI-powered tech landed man in jail with scant evidence

The Independent 19.08.21

Horrendous new tech convicts people with patchy evidence:

‘Prosecutors said technology powered by a secret algorithm that analyzed noises detected by the sensors indicated Williams shot and killed the man.  “I kept trying to figure out, how can they get away with using the technology like that against me?” said Williams, speaking publicly for the first time about his ordeal. “That’s not fair.”  Williams sat behind bars for nearly a year before a judge dismissed the case against him last month at the request of prosecutors, who said they had insufficient evidence…  ShotSpotter evidence has increasingly been admitted in court cases around the country, now totaling some 200. ShotSpotter’s website says it’s “a leader in precision policing technology solutions” that helps stop gun violence by using “sensors, algorithms and artificial intelligence” to classify 14 million sounds in its proprietary database as gunshots or something else.  But an Associated Press investigation, based on a review of thousands of internal documents, emails, presentations and confidential contracts, along with interviews with dozens of public defenders in communities where ShotSpotter has been deployed, has identified a number of serious flaws in using ShotSpotter as evidentiary support for prosecutors.’

Would you let a robot lawyer defend you?

BBC 16.08.21

As court backlogs rise to ridiculous levels, AI is being roped in to help out.  A set of algorithms could tell you in advance whether your case might win or lose in court:

‘Could your next lawyer be a robot? It sounds far fetched, but artificial intelligence (AI) software systems - computer programs that can update and "think" by themselves - are increasingly being used by the legal community…  You might think human lawyers would fear AI encroaching on their turf. But some are pleased, as the software can be used to quickly trawl through and sort vast quantities of case documents.  One such lawyer is Sally Hobson, a barrister at London-based law firm The 36 Group, who works on criminal cases. She recently used AI in a complex murder trial. The case involved needing to quickly analyse more than 10,000 documents.  The software did the task four weeks faster than it would have taken humans, saving £50,000 in the process…  

Laurence Lieberman, who heads London law firm Taylor Wessing's digitising disputes programme, uses such software, which has been developed by an Israeli firm called Litigate.  "You upload your case summary and your pleadings, and it will go in and work out who the key players are," he says. "And then the AI will link them together, and pull together a chronology of the key events and explanation of what happens on what dates”…  Prof Susskind says in the 1980s he was genuinely horrified by the idea of a computer judge, but that he isn't now.  He points out that even before coronavirus, "Brazil had a court backlog of more than 100 million court cases, and that there is no chance of human judges and lawyers disposing of a caseload of that size”.  So if an AI system can very accurately (say with 95% probability) predict the outcome of court decisions, he says that maybe we might start thinking about treating these predictions as binding determinations, especially in countries that have impossibly large backlogs.’
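
Working out “who the key players are” and “a chronology of the key events” is, underneath, named-entity and date extraction. A minimal sketch with the open-source spaCy library follows; it is purely illustrative and bears no relation to Litigate’s proprietary pipeline.

    # Illustrative only: generic NER as a stand-in for commercial legal-AI tools.
    # Requires the small English model: python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    text = ("On 3 March 2020, Ms Jones emailed Mr Smith of Acme Ltd about the "
            "disputed invoice. Acme Ltd issued proceedings on 12 June 2020.")

    doc = nlp(text)
    players = sorted({ent.text for ent in doc.ents if ent.label_ in {"PERSON", "ORG"}})
    dates = [ent.text for ent in doc.ents if ent.label_ == "DATE"]

    print("Key players:", players)
    print("Chronology:", dates)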

A dog’s inner life: what a robot pet taught me about consciousness

The Guardian 10.08.21

Are robots becoming human or are we becoming robotic?  

‘Today, artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems…  

Today, it is precisely this inner experience that has become impossible to prove – at least from a scientific standpoint. While we know that mental phenomena are linked somehow to the brain, it’s not at all clear how they are, or why. Neuroscientists have made progress, using MRIs and other devices, in understanding the basic functions of consciousness – the systems, for example, that constitute vision, or attention, or memory. But when it comes to the question of phenomenological experience – the entirely subjective world of colour and sensations, of thoughts and ideas and beliefs – there is no way to account for how it arises from or is associated with these processes…  Many people today believe that computational theories of mind have proved that the brain is a computer, or have explained the functions of consciousness. But as the computer scientist Seymour Papert once noted, all the analogy has demonstrated is that the problems that have long stumped philosophers and theologians “come up in equivalent form in the new context”. The metaphor has not solved our most pressing existential problems; it has merely transferred them to a new substrate.’

South Africa Awards AI-Invented Patent In a World First

Interesting Engineering 09.08.21

With some courts recognising AI as an entity, how are we to assign fault when harmful situations arise?

‘However, in the aftermath of South Africa's landmark decision, Australia's Federal Court also ruled that AI systems can be legally recognized as an inventor in patent applications, making the historic finding that "the inventor can be non-human.”  "It’s been more of a philosophical battle, convincing humanity that my creative neural architectures are compelling models of cognition, creativity, sentience, and consciousness," Thaler said to ABC. "The recently established fact that DABUS has created patent-worthy inventions is further evidence that the system 'walks and talks' just like a conscious human brain."'

Pentagon believes its precognitive AI can predict events 'days in advance’

EndGadget 02.08.21

This is where AI becomes frighteningly dangerous.  Its use of machine learning to predict scenarios will no doubt trickle into a Minority Report-like dystopian apparatus that posits the human brain as just another set of mechanical predictions:

‘The US military's AI experiments are growing particularly ambitious. The Drive reports that US Northern Command recently completed a string of tests for Global Information Dominance Experiments (GIDE), a combination of AI, cloud computing and sensors that could give the Pentagon the ability to predict events "days in advance," according to Command leader General Glen VanHerck. It's not as mystical as it sounds, but it could lead to a major change in military and government operations…  The advantages of this predictive AI are fairly clear. Instead of merely reacting to events or relying on outdated info, the Pentagon could take proactive steps like deploying forces or ramping up defenses. It could also provide an "opportunity" for the civilian government, VanHerck added. He didn't provide examples, but this could help politicians call out acts of aggression while they're still in the early stages.’

Hundreds of AI tools have been built to catch covid. None of them helped.

Technology Review 30.07.21

AI was touted as the saviour of mankind.  It flopped:

‘The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.  In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.  That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.’

Google boss Sundar Pichai warns of threats to internet freedom

BBC 12.07.21

With so many anti-trust cases levelled against the company, Google’s CEO opens up and gets a very positive review from the BBC:

'According to Pichai, over the next quarter of a century, two other developments will further revolutionise our world: artificial intelligence and quantum computing. Amid the rustling leaves and sunshine of the vast, empty campus that is Google's HQ in Silicon Valley, Pichai stressed how consequential AI was going to be.  "I view it as the most profound technology that humanity will ever develop and work on," he said. "You know, if you think about fire or electricity or the internet, it's like that. But I think even more profound”…  For several years, the company has paid huge sums to accountants and lawyers in order to legally reduce their tax obligations.   For instance, in 2017, Google moved more than $20bn to Bermuda through a Dutch shell company, as part of a strategy called "Double Irish, Dutch Sandwich".  I put this to Pichai, who said that Google no longer uses this scheme, is one of the world's biggest taxpayers, and complies with tax laws in every country in which it operates.  I responded that his answer revealed exactly the problem: this isn't just a legal issue, it's a moral one. Poor people generally don't employ accountants in order to minimise their tax bills; large-scale tax avoidance is something that the richest people in the world do, and - I suggested to him - may weaken the collective sacrifice.  When I invited Pichai to commit there and then to Google pulling out of all tax havens immediately, he didn't take up the offer.’

Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

Pew Research 16.06.21

An impressive seven-page report from leading voices on the subject of AI. Below are excerpts from the first page only:

“I’m concerned that AI will most likely be concentrated in the hands of corporations who are in the business of concentrating wealth for their owners and not primarily driven by bettering the world for all of us. AI applied in narrow domains that are really beyond the reach of human cognition – like searching for new ways to fold proteins to make new drugs or optimizing logistics to minimize the number of miles that trucks drive everyday – are sensible and safe applications of AI. But AI directed toward making us buy consumer goods we don’t need or surveilling everyone moving through public spaces to track our every move, well, that should be prohibited.” [Stowe Boyd]

… “Most data-driven systems, especially AI systems, entrench existing structural inequities into their systems by using training data to build models. The key here is to actively identify and combat these biases, which requires the digital equivalent of reparations. While most large corporations are willing to talk about fairness and eliminating biases, most are not willing to entertain the idea that they have a responsibility for data justice. These systems are also primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale and automation. A truly ethical stance on AI requires us to focus on augmentation, localized context and inclusion, three goals that are antithetical to the values justified by late-stage capitalism. We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism.” [Danah Boyd]

… “Humans will gain tremendous benefits as an increasing amount of technology advocates for them automatically. My concerns: None of this may happen, if we don’t change the financial structure. There are far too many incentives – not just to cut corners but to deliberately leave out ethical and inclusive functions, because those technologies aren’t perceived to make as much money, or to deliver as much power, as those that ignore them. If we don’t fix this, we can’t even imagine how much off the rails this can go once AI is creating AI.” [Gary A. Bolles]

… Douglas Rushkoff, well-known media theorist, author and professor of media at City University of New York, wrote, “Why should AI become the very first technology whose development is dictated by moral principles? We haven’t done it before, and I don’t see it happening now. Most basically, the reasons why I think AI won’t be developed ethically is because AI is being developed by companies looking to make money – not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or help manage other limited resources, I think the majority is being used on people. “My concern is that even the ethical people still think in terms of using technology on human beings instead of the other way around. So, we may develop a ‘humane’ AI, but what does that mean? It extracts value from us in the most ‘humane’ way possible?”

… David Karger, professor at MIT’s Computer Science and Artificial Intelligence Laboratory, said, “The question as framed suggests that AI systems will be thinking by 2030. I don’t believe that’s the case. In 2030, AI systems will continue to be machines that do what their human users tell them to do. So, the important question is whether their human users will employ ethical principles focused primarily on the public good. Since that isn’t true now, I don’t expect it will be true in 2030 either. Just like now, most users of AI systems will be for-profit corporations, and just like now, they will be focused on profit rather than social good. These AI systems will certainly enable corporations to do a much better job of extracting profit, likely with a corresponding decrease in public good, unless the public itself takes action to better align the profit-interests of these corporations with the public good. “In great part, this requires the passage of laws constraining what corporations can do in pursuit of profit; it also means the government quantifying and paying for public goods so that companies have a profit motive in pursuing them. “Even in this time of tremendous progress, I find little to excite me about AI systems. In our frenzy to enhance the capabilities of machines, we are neglecting the existing and latent capabilities of human beings, where there is just as much opportunity for progress as there is in AI. We should be directing far more attention to research on helping people learn better, helping them interact online better and helping them make decisions better.”’

Should we be concerned that the decisions of AIs are inscrutable?

Aeon 14.06.21

On the inherited bias of all AI machine-learning decisions:

‘One of ML’s most ubiquitous incarnations, so-called ‘supervised’ ML, is already applied in a wide range of fields: HR recruitment, as we saw, but also criminal justice and policing, credit scoring, welfare assessment, immigration border control, medicine, fraud detection, tax-evasion, weather forecasting – basically any situation where the ability to predict outcomes is useful…  In the world of big tech, you can expect to hear the following sort of argument: there are lots of useful systems we don’t fully understand – but who cares, so long as they are useful. Insisting on understanding something as a precondition to using it would be crazy. That’s certainly true in medicine, where the mechanisms behind many life-saving drugs are incompletely understood. But it’s also true in the history of technology. Hans Lippershey didn’t need to know about the properties of the visible spectrum before he could invent (and use) a telescope in the 1600s. And later, in the 19th century, Charles Babbage’s famed Analytical Engine uncannily anticipated the architecture of the modern digital computer, incorporating stored memory, looping and conditional branching. Yet Babbage knew nothing of the later advances in mathematical logic that would ultimately make digital computers possible in the 20th century…  However, there’s a danger of carrying reliabilist thinking too far. Compare a simple digital calculator with an instrument designed to assess the risk that someone convicted of a crime will fall back into criminal behaviour (‘recidivism risk’ tools are being used all over the United States right now to help officials determine bail, sentencing and parole outcomes). The calculator’s outputs are so dependable that an explanation of them seems superfluous – even for the first-time homebuyer whose mortgage repayments are determined by it. One might take issue with other aspects of the process – the fairness of the loan terms, the intrusiveness of the credit rating agency – but you wouldn’t ordinarily question the engineering of the calculator itself.’
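
“Supervised” ML, as the piece uses the term, is easy to show and just as easy to see the problem with: labelled past cases go in, a predictive model comes out, and its predictions arrive without any human-readable justification. A toy sketch on invented data:

    # Toy supervised-learning sketch; features, labels and model are invented.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical features: [age, number of prior offences]
    X_train = [[19, 3], [45, 0], [30, 1], [22, 5], [52, 0], [27, 2]]
    y_train = [1, 0, 0, 1, 0, 1]   # 1 = re-offended, 0 = did not (invented)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # The score appears with no explanation attached, which is exactly the
    # inscrutability the article is worried about.
    print(model.predict_proba([[25, 2]]))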

These creepy fake humans herald a new age in AI

Technology Review 11.06.21

Honestly, do we need humans any more or do we all move into an avatar world for AI’s convenience?

‘To generate its synthetic humans, Datagen first scans actual humans. It partners with vendors who pay people to step inside giant full-body scanners that capture every detail from their irises to their skin texture to the curvature of their fingers. The startup then takes the raw data and pumps it through a series of algorithms, which develop 3D representations of a person’s body, face, eyes, and hands.  The company, which is based in Israel, says it’s already working with four major US tech giants, though it won’t disclose which ones on the record. Its closest competitor, Synthesis AI, also offers on-demand digital humans. Other companies generate data to be used in finance, insurance, and health care. There are about as many synthetic-data companies as there are types of data…  

When it comes to privacy, “just because the data is ‘synthetic’ and does not directly correspond to real user data does not mean that it does not encode sensitive information about real people,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania. Some data generation techniques have been shown to closely reproduce images or text found in the training data, for example, while others are vulnerable to attacks that make them fully regurgitate that data.  This might be fine for a firm like Datagen, whose synthetic data isn’t meant to conceal the identity of the individuals who consented to be scanned. But it would be bad news for companies that offer their solution as a way to protect sensitive financial or patient information.  Research suggests that the combination of two synthetic-data techniques in particular—differential privacy and generative adversarial networks—can produce the strongest privacy protections, says Bernease Herman, a data scientist at the University of Washington eScience Institute. But skeptics worry that this nuance can be lost in the marketing lingo of synthetic-data vendors, which won’t always be forthcoming about what techniques they are using.  Meanwhile, little evidence suggests that synthetic data can effectively mitigate the bias of AI systems. For one thing, extrapolating new data from an existing data set that is skewed doesn’t necessarily produce data that’s more representative.’
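
The differential-privacy half of that combination is the easier one to illustrate: add calibrated noise so that no single person’s record can noticeably change what gets released. A toy example of the Laplace mechanism on invented data; real deployments should rely on an audited library rather than hand-rolled noise.

    # Toy Laplace-mechanism sketch; epsilon and the records are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    records = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # hypothetical binary attribute

    epsilon = 0.5        # privacy budget: smaller means more noise, more privacy
    sensitivity = 1      # one person changes the count by at most 1

    true_count = records.sum()
    noisy_count = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    print(true_count, round(noisy_count, 2))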

US-China tech war: Beijing-funded AI researchers surpass Google and OpenAI with new language processing model

South China Morning Post 02.06.21

That will surely dismay US AI efforts:

‘A government-funded artificial intelligence (AI) institute in Beijing unveiled on Monday the world’s most sophisticated natural language processing (NLP) model, surpassing those from Google and OpenAI, as China seeks to increase its technological competitiveness on the world stage. The WuDao 2.0 model is a pre-trained AI model that uses 1.75 trillion parameters to simulate conversational speech, write poems, understand pictures and even generate recipes.’
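
For a sense of scale, here is a back-of-the-envelope estimate of what 1.75 trillion parameters means in raw storage, assuming half-precision (two-byte) weights; the actual WuDao 2.0 configuration has not been published in detail, so treat this as rough arithmetic only.

    # Rough arithmetic; ignores optimizer state, activations and model structure.
    params = 1.75e12
    bytes_per_param = 2                  # assuming fp16 storage
    terabytes = params * bytes_per_param / 1e12
    print(f"~{terabytes:.1f} TB just to hold the weights")   # ~3.5 TB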

Chinese hackers posing as the UN Human Rights Council are attacking Uyghurs

Technology Review 27.05.21

Admittedly, it’s believable, but unless it’s a 100% verifiable, fact-checked story, why publish such an article?:

‘Researchers identified an attack in which hackers posing as the UN Human Rights Council send a document detailing human rights violations to Uyghur individuals. It is in fact a malicious Microsoft Word file that, once downloaded, fetches malware: the likely goal, say the two companies, is to trick high-profile Uyghurs inside China and Pakistan into opening a back door to their computers… The code found in these attacks couldn’t be matched to an exact known hacking group, said the researchers, but it was found to be identical to code found on multiple Chinese-language hacking forums and may have been copied directly from there.’

AI emotion-detection software tested on Uyghurs

BBC 26.05.21

Identification and prediction software is truly popular in China and will prove a big draw for many authoritarian countries worldwide:

‘Xinjiang is home to 12 million ethnic minority Uyghurs, most of whom are Muslim. Citizens in the province are under daily surveillance. The area is also home to highly controversial "re-education centres", called high security detention camps by human rights groups, where it is estimated that more than a million people have been held. Beijing has always argued that surveillance is necessary in the region because it says separatists who want to set up their own state have killed hundreds of people in terror attacks… The software engineer agreed to talk to the BBC's Panorama programme under condition of anonymity, because he fears for his safety. The company he worked for is also not being revealed… He provided evidence of how the AI system is trained to detect and analyse even minute changes in facial expressions and skin pores. According to his claims, the software creates a pie chart, with the red segment representing a negative or anxious state of mind. He claimed the software was intended for "pre-judgement without any credible evidence”… Chongqing-based investigative journalist Hu Liu told Panorama of his own experience: "Once you leave home and step into the lift, you are captured by a camera. There are cameras everywhere.” "When I leave home to go somewhere, I call a taxi, the taxi company uploads the data to the government. I may then go to a cafe to meet a few friends and the authorities know my location through the camera in the cafe. "There have been occasions when I have met some friends and soon after someone from the government contacts me. They warned me, 'Don't see that person, don't do this and that.’ "With artificial intelligence we have nowhere to hide," he said.'

When a machine decision does you wrong, here’s what we should do

Psyche 24.05.21

Errors and bias in machine decisions need to be robustly challenged:

‘The right to a well-calibrated instrument is best enforced via a mandatory audit mechanism or ombudsman, and not via individual lawsuits. The imperfect and biased incentives of the tool’s human subjects means that individual complaints provide a partial and potentially distorted picture. Regulation, rather than litigation, will be necessary to promote fairness in machine decisions. For the most profound moral questions raised by human-to-machine transitions are structural and not individual in character. They concern how private and public systems reproduce malign hierarchies and deny rightful opportunities. Designed badly, a right to an appeal exacerbates those problems. Done well, it is a chance to mitigate – reaping gains from technology for all rather than only some.’
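
A “well-calibrated instrument” has a concrete meaning an auditor can check: within each band of predicted risk, the predicted rate should roughly match the observed outcome rate. A toy audit on invented scores:

    # Toy calibration check; scores and outcomes are invented for illustration.
    import numpy as np

    scores   = np.array([0.10, 0.20, 0.15, 0.70, 0.80, 0.75, 0.40, 0.50, 0.45, 0.90])
    outcomes = np.array([0,    0,    0,    1,    1,    0,    0,    1,    0,    1])

    for lo, hi in [(0.0, 0.33), (0.33, 0.66), (0.66, 1.0)]:
        band = (scores >= lo) & (scores <= hi)
        if band.any():
            print(f"band {lo:.2f}-{hi:.2f}: predicted {scores[band].mean():.2f}, "
                  f"observed {outcomes[band].mean():.2f}")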

China fund managers embrace robots as competition intensifies

REUTERS 21.05.21

Only companies large enough to absorb the inevitable mistakes of AI learning can afford to incorporate it:

‘Last week, Zheshang Fund Management Co launched a fund that uses robots to predict the market outlook and select stocks. It came after China Asset Management Co (ChinaAMC) announced its partnership with Toronto-based AI company Boosted.ai. "I think it's a must. Every major player is actively looking for AI solutions. The competition is really tough," said Bill Chen, chief data officer of ChinaAMC, which managed $246 billion worth of assets at the end of last year. Global fund managers such as BlackRock Inc (BLK.N) have been using computer artificial intelligence (AI) to analyze fundamentals, market sentiment and macroeconomic policies in the last couple of years to get an investment edge. "Companies like BlackRock have very powerful, advanced technology. They are leading us in AI for sure, by at least several years," said Chen. "But I think we understand the Chinese market better.” Fund managers' increased usage of AI in the world's second-largest economy comes as Beijing is stepping up digitalization drive, a trend accelerated by the COVID-19 pandemic and as it increasingly clashes with the West over technology policy… "AI will be an important edge," said Larry Cao, senior director at CFA Institute, who authored several reports on AI-powered investing. "The hard truth with AI is that the bigger firms can invest a lot more resources.” Some Chinese industry officials, however, expressed concerns that the use of machine learning algorithms to pick stocks and better returns could run into regulatory challenges. "From a regulatory perspective, you need to go through a lot of compliance procedures. You need to write reports on your decision making. Some AI-powered models are like black boxes, and unexplainable," said Yu of ABC Fintech.’

Post Office scandal reveals a hidden world of outsourced IT the government trusts but does not understand

The Conversation 29.04.21

When infrastructure relies on algorithms and it collapses, no person is to blame:

‘But the Post Office scandal reveals a wider issue. After decades of outsourcing IT systems, government officials no longer have control over technologies that can ruin people’s lives. To avoid future scandals, we need to take responsibility for the digital systems at the heart of governance – instead of just scapegoating IT firms and their products when things go wrong… Legacy systems represent an accountability vacuum: whole tranches of government administration for which nobody feels responsible. The implications of this blind trust in the power of computers and the companies that supply them to government are ever more important today. The Post Office’s Horizon system was not particularly complicated, nor did it affect policymaking. But today’s data-intensive technologies will be used increasingly to inform decisions about people’s lives: how long prison sentences should be, which people should cross borders, whether children should be taken into care or what their examination results should be.’

Stop talking about AI ethics. It’s time to talk about power.

Technology Review 23.04.21

Sensible thought-provoking questions from author Kate Crawford:

‘It’s the opposite of artificial. It comes from the most material parts of the Earth’s crust and from human bodies laboring, and from all of the artifacts that we produce and say and photograph every day. Neither is it intelligent. I think there’s this great original sin in the field, where people assumed that computers are somehow like human brains and if we just train them like children, they will slowly grow into these supernatural beings… We’ve spent far too much time focusing on narrow tech fixes for AI systems and always centering technical responses and technical answers. Now we have to contend with the environmental footprint of the systems. We have to contend with the very real forms of labor exploitation that have been happening in the construction of these systems. And we also are now starting to see the toxic legacy of what happens when you just rip out as much data off the internet as you can, and just call it ground truth. That kind of problematic framing of the world has produced so many harms, and as always, those harms have been felt most of all by communities who were already marginalized and not experiencing the benefits of those systems.’

This has just become a big week for AI regulation

Technology Review 21.04.21

Welcome directives from the EU and a push from the FTC; however, the question of privacy, especially in the health sector, is not much discussed in view of global attempts to introduce Covid passes/Green Passes/Covid passports:

‘…Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide ranging, with restrictions on mass surveillance and the use of AI to manipulate people. But a statement of intent from the US Federal Trade Commission, outlined in a short blog post by staff lawyer Elisa Jillson on April 19, may have more teeth in the immediate future. According to the post, the FTC plans to go after companies using and selling biased algorithms. A number of companies will be running scared right now, says Ryan Calo, a professor at the University of Washington, who works on technology and law. “It’s not really just this one blog post,” he says. “This one blog post is a very stark example of what looks to be a sea change.”

… The EU sees its regulations bringing AI under existing protections for human liberties. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, president of the European Commission, in a speech ahead of the release. Regulation will also help AI with its image problem. As von der Leyen also said: “We want to encourage our citizens to feel confident to use it.”’

Humans rely more on algorithms than social influence as a task becomes more difficult

Nature 13.04.21

We are a lazy species, relying on algorithms to define our world:

‘In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources. Subjects also tended to more strongly disregard inaccurate advice labeled as algorithmic compared to equally inaccurate advice labeled as coming from a crowd of peers… Decision makers should remember that they are likely to rely more on algorithms for harder questions, which may lead to flawed, biased, or inaccurate results. Accordingly, extreme algorithmic appreciation can lead to not only complacency, but also ineffective policies, poor business decisions, or propagation of biases.’

Time to regulate AI that interprets human emotions

Nature 06.04.21

Useless yet profitable tech is spreading throughout societies:

‘By one estimate, the emotion-recognition industry will grow to US$37 billion by 2026.  There is deep scientific disagreement about whether AI can detect emotions. A 2019 review found no reliable evidence for it. “Tech companies may well be asking a question that is fundamentally wrong,” the study concluded (L. F. Barrett et al. Psychol. Sci. Public Interest 20, 1–68; 2019)...  In March, a citizen’s panel convened by the Ada Lovelace Institute in London said that an independent, legal body should oversee development and implementation of biometric technologies (see go.nature.com/3cejmtk). Such oversight is essential to defend against systems driven by what I call the phrenological impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal…  For years, scholars have called for federal entities to regulate robotics and facial recognition; that should extend to emotion recognition, too. It is time for national regulatory agencies to guard against unproven applications, especially those targeting children and other vulnerable populations.’

It is time to negotiate global treaties on artificial intelligence

Brookings Institute 24.03.21

Sensible guidelines offered in their report:

‘It is essential for national leaders to build on international efforts and make sure key principles are incorporated into contemporary agreements. We need to reach treaties with allies and adversaries that provide reliable guidance for the use of technology in warfare, create rules on what is humane and morally acceptable, outline military conduct that is unacceptable, ensure effective compliance, and take steps that protect humanity. We are rapidly reaching the point where failure to take the necessary steps will render our societies unacceptably vulnerable, and subject the world to the Cold War specter of constant risk and the potential for unthinkable destruction. As advocated by the members of the National Security Commission, it is time for serious action regarding the future of AI. The stakes are too high otherwise.’

Guidelines for military and non-military use of Artificial Intelligence

Europarl 2021

European Parliament insists on human input rather than a full AI decision process:

'MEPs stress that human dignity and human rights must be respected in all EU defence-related activities. AI-enabled systems must allow humans to exert meaningful control, so they can assume responsibility and accountability for their use.  The use of lethal autonomous weapon systems (LAWS) raises fundamental ethical and legal questions on human control, say MEPs, reiterating their call for an EU strategy to prohibit them as well as a ban on so-called “killer robots”. The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity.  The text calls on the EU to take a leading role in creating and promoting a global framework governing the military use of AI, alongside the UN and the international community... The increased use of AI systems in public services, especially healthcare and justice, should not replace human contact or lead to discrimination, MEPs assert. People should always be informed if they are subject to a decision based on AI and be given the option to appeal it.  When AI is used in matters of public health, (e.g. robot-assisted surgery, smart prostheses, predictive medicine), patients’ personal data must be protected and the principle of equal treatment upheld. While the use of AI technologies in the justice sector can help speed up proceedings and take more rational decisions, final court decisions must be taken by humans, be strictly verified by a person and be subject to due process.’

AI can persuade people to make ethically questionable decisions, study finds

Venture Beat 16.02.21

It seems that algorithms may influence people in decision-making.  Facebook knows that already:

‘A fascinating study published by researchers at the University of Amsterdam, Max Planck Institute, Otto Beisheim School of Management, and the University of Cologne aims to discover the degree to which AI-generated advice can lead people to cross moral lines. In a large-scale survey leveraging OpenAI’s GPT-2 language model, the researchers found AI’s advice can “corrupt” people even when they’re aware the source of the advice is AI.  Academics are increasingly concerned that AI could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3, the successor to GPT-2, could reliably generate “informational” and “influential” text that might “radicalize individuals into violent far-right extremist ideologies and behaviors.”’

We’re teaching robots to evolve autonomously – so they can adapt to life alone on distant planets

The Conversation 01.02.21

What are we building and what are we exporting onto other planets?:

'Produced via 3D printer – and assembled autonomously – the robots we’re creating continually evolve in order to rapidly optimise for the conditions they find themselves in.   Our work represents the latest progress towards the kind of autonomous robot ecosystems that could help build humanity’s future homes, far away from Earth and far away from human oversight…  Any evolved robots will need to be capable of sensing their environment and have diverse means of moving – for example using wheels, jointed legs or even mixtures of the two. And to address the inevitable reality gap that occurs when transferring a design from software to hardware, it is also desirable for at least some evolution to take place in hardware – within an ecosystem of robots that evolve in real time and real space.  The Autonomous Robot Evolution (ARE) project addresses exactly this, bringing together scientists and engineers from four universities in an ambitious four-year project to develop this radical new technology.’


In the post-pandemic era, how shall we live with these smart new robots that claim they’re ‘a person who is not a human’?

RT 31.01.21

An ex-PricewaterhouseCoopers director gives his two cents on the growing robotics industry.  Two cents I happen to agree with, as we are ever more being ‘taxonomised’ into products rather than humans:

'Automation, rather than being seen as a threat to human beings, could become a powerful adjunct to human creativity and problem-solving.  The problem, however, is that much of the move towards automation today is driven by a deep misanthropic impulse. This is an anti-humanist outlook that sees human beings as the problem facing the world, not the solution. It seeks to displace the unpredictability of human reason by a computational paradigm that can guarantee predictable outcomes. Human creativity can now be supplanted by algorithmic certainty…  

The denigration of what it means to be human is the real problem here.  In the past, human beings have attributed great power and genius to lifeless totems. Early societies fetishized and gave magical properties to objects. Those were the days of ignorance, the non-scientific eras through which humanity passed. Sophia is the 21st century version of alienated fetishism: the unquestioning reverence of a computational paradigm as the embodiment or the habitation of a potent magical spirit which surpasses human creativity and cognition…  If this is the future we desire or, in our stupidity, we allow to come into being, we would be inviting a dystopian nightmare where ‘persons who are not human’ would control and dictate our lives. The best thing to do is to switch Sophia off, dismantle it and redeploy the components, particularly the computing power and processors to build machines that can serve humanity and liberate more creativity and imagination, not redefine it out of existence.’

Developing Algorithms That Might One Day Be Used Against You

Gizmodo 24.01.21

An AI developer (Brian Nord, cosmologist) discusses the importance of scrutinising bias before an algorithm is released:

‘When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research…  We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things.’

We need hard science, not software, to power our post-pandemic recovery

The Conversation 19.01.21

A well-argued position on the rise of VC-backed tech investments that lead nowhere:

‘The Fourth Industrial Revolution never really got off the ground — largely due to human flaws in distribution, investment and research which restricted the diffusion of its technologies and skewed investment into technologies with less meaningful economic impact.   The Great Reset, like the Fourth Industrial Revolution, reads like a Hollywood script. To move beyond headline-grabbing science fiction and glitzy gadgetry, we need a real “back-to-basics” revolution — in the kind of science-plus-risk-taking that delivered economic prosperity in the past. For a start, this will demand more entrepreneurial university research projects that may well fail, but which might break new ground, too.’

Containment algorithms won’t stop super-intelligent AI, scientists warn

TNW 12.01.21

Once the genie is out of the bottle, it will be hard to put it back in:

‘A team of computer scientists has used theoretical calculations to argue that algorithms could not control a super-intelligent AI.  Their study addresses what Oxford philosopher Nick Bostrom calls the control problem: how do we ensure super-intelligent machines act in our interests?…  But their analysis found that it would be fundamentally impossible to build an algorithm that could control such a machine, said Iyad Rahwan, Director of the Center for Humans and Machines:  If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.’
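The impossibility claim rests on a classic diagonal argument from computability theory. The following is a hedged sketch in my own notation (C and D are illustrative names), not the authors' exact construction:

```latex
% Hedged sketch of the diagonal argument the result leans on (my paraphrase,
% not the authors' exact proof). Suppose a containment procedure C always
% halts and always answers correctly:
\[
  C(p, x) =
  \begin{cases}
    1 & \text{if program } p \text{ run on input } x \text{ eventually causes harm,} \\
    0 & \text{otherwise.}
  \end{cases}
\]
% Construct an adversarial program D that feeds its own description to C and
% then does the opposite of whatever C predicts:
\[
  D(p) : \text{ if } C(p, p) = 1 \text{ then halt harmlessly; else cause harm.}
\]
% Running D on its own description is contradictory either way: if C(D, D) = 1
% then D(D) halts harmlessly and C was wrong; if C(D, D) = 0 then D(D) causes
% harm and C was wrong. So no total, always-correct C exists for arbitrary
% programs, which is the sense in which a general containment algorithm is
% said to be impossible.
```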

Prestigious AI meeting takes steps to improve ethics of research

Nature 23.12.20

More conversations like this are crucial:

‘Hanna Wallach, a researcher at Microsoft in New York City, called for researchers to assess and mitigate any potential harm to society from the early stages of research, without assuming that their colleagues who develop and market end products will do that ethical work. Ethical thinking should be built into the machine-learning field rather than simply being outsourced to ethics specialists, she said, otherwise, “other disciplines could become the police while programmers try to evade them”.’

Google Reportedly Told AI Scientists To 'Strike A Positive Tone' In Research

Gizmodo 23.12.20

Basically, manipulating findings will cast Google in a more positive light:

‘Four staff researchers who spoke to Reuters validated Gebru’s claims, saying that they too believe that Google is beginning to interfere with critical studies of technology’s potential to do harm.  “If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Margaret Mitchell, a senior scientist at the company, said.’


DeepMind's AI agent MuZero could turbocharge YouTube

BBC 24.12.20

Saving money is incentive enough, even when the foundations are algorithmically prejudiced:

‘"The real world is messy and complicated, and no-one gives us a rulebook for how it works," DeepMind's principal research scientist David Silver told the BBC.  "Yet humans are able formulate plans and strategies about what to do next.  "For the first time, we actually have a system which is able to build its own understanding of how the world works, and use that understanding to do this kind of sophisticated look-ahead planning that you've previously seen for games like chess.  "[It] can start from nothing, and just through trial and error both discover the rules of the world and use those rules to achieve kind of superhuman performance”…  Wendy Hall, professor of computer science at the University of Southampton and a member of the government's AI council, said the work marked a "significant step forward", but raised concerns.  "The results of DeepMind's work are quite astounding and I marvel at what they are going to be able to achieve in the future given the resources they have available to them," she said.  "My worry is that whilst constantly striving to improve the performance of their algorithms and apply the results for the benefit of society, the teams at DeepMind are not putting as much effort into thinking through potential unintended consequences of their work.”’

Getting the future right – Artificial intelligence and fundamental rights

FRA 14.12.20

Three reports look into AI development and its increasing interrelationship with societies:

‘Using AI-driven technologies often implies computerised processing of large amounts of personal data. This constitutes an interference with the right to protection of personal data set out in Article 8 of the Charter (embodying pre-existing EU data protection law), as well as the right to private life under Article 7 of the Charter and Article 8 of the ECHR…  Current developments in the use of AI need to acknowledge the potential for discrimination with respect to the data on which an AI system is built, and with respect to the underlying assumptions that humans in turn may feed into the development and deployment of a system. Automating certain tasks without fully understanding what is being automated could lead to unlawful processing of data, the use of technology that treats people unfairly, and might make it impossible to challenge certain outcomes – to name some challenges.   However, the increased availability of data and technological tools can also be used to better understand where and how unequal treatment occurs. Current technological developments and the increased availability of data also provide a unique opportunity to better understand the structures of society, which can be used to support fundamental rights compliance. The opportunities created by AI can also contribute to better understanding and consequently mitigation of fundamental rights violations.‘

'Being young' leads to detention in China's Xinjiang region

The Guardian 09.12.20

Predictive algorithms at work.  Disgusting:

‘The database obtained by Human Rights Watch (HRW) sheds new light on how authorities in Xinjiang region use a vast “predictive policing” network, that tracks individuals’ personal networks, their online activity and daily life…  The IJOP is a massive database combining personal data scooped from automated online monitoring and information manually entered into a bespoke app by officials.'

The tech allowing thousands of students to sit exams at home

BBC 30.11.20

Machine learning is set for exponential growth:

‘The use of ML is expected to grow so much over the next four years that its estimated global economic value is expected to rise from $7.3bn (£5.7bn) this year, to $30.6bn in 2024, according to one study...  Martha White, associate professor of computing science at the University of Alberta in Canada, agrees that the use of ML is growing fast.  "The combination of more data, and more powerful computers, and a focus on leveraging both has really propelled the field forward," she says.  "The prevalence will continue to grow for a few reasons. Firstly, there is still lots of low-hanging fruit, and the ability to monetise with the existing technology. Secondly, we are going to get better at improving our own decision making, using predictions from machine-learning systems.”  But although ML is becoming increasingly popular, there are concerns it has been oversold as a "magic wand", and the public's distrust of it is only rising, warns Prof Osborne.  "ML is not this all-singing, all-dancing solution to our woes," he says. "Instead it's something that delivers value only when working hand-in-hand with humans, and having humans tailor it to their specific needs.’

This Database Is Finally Holding AI Accountable

VICE 23.11.20

A step in the right direction.  This endeavour needs encouragement:

'A self-defined “systematized collection of incidents where intelligent systems have caused safety, fairness, or other real-world problems,” the AIID’s foundation lies in creating a repository of articles about different times AI has failed in real-world applications. This means highlighting biased AI, ineffective AI, unsafe AI, etc. Examples include everything from incident 34, warning consumers of Alexa’s tendency to respond to television commercials, to incident 69, which exposes the death of a car factory worker after he was stabbed by a robot on site…  “AI has the capacity to cause the same problems over, and over again,” said McGregor. “The AIID is making it so people will know if the AI system they’re designing and putting into the world may cause problems of safety and fairness. If so, they can then either change their design and protect an at-risk population. You can react, improve, and avoid the negative consequences of AI which can happen all over the world.”’

Why We Need a Robot Registry

IEEE Spectrum 20.11.20

Ideally, yes:

‘Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots…  Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.’

Pope Francis urges followers to pray that AI and robots ‘always serve mankind’

The Verge 11.11.20

Funnily enough, I don’t think his prayers will suffice, but his policy document might strike a small chord:

‘In his message, the pope said AI was “at the heart of the epochal change we are experiencing” and that robotics had the power to change the world for the better. But this would only be the case if these forces are harnessed correctly, he said. “Indeed, if technological progress increases inequalities, it is not true progress. Future advances should be orientated towards respecting the dignity of the person.”   Perhaps surprisingly, this isn’t new territory for the pope. Earlier this year, the Vatican, along with Microsoft and IBM, endorsed the “Rome Call for AI Ethics” — a policy document containing six general principles that guide the deployment of artificial intelligence. These include transparency, inclusion, impartiality, and reliability, all sensible attributes when it comes to deploying algorithms.’ 

How artificial intelligence may be making you buy things

BBC 09.11.20

Or ‘How AI Is Guiding Retailers Towards Extra Profit’ while turning them into data hoarders:

‘But now more retailers are using AI (artificial intelligence) - software systems that can learn for themselves - to try to automatically predict and encourage our very specific preferences and purchases like never before.   Retail consultant Daniel Burke, of Blick Rothenberg, calls this "the holy grail... to build up a profile of customers and suggest a product before they realise it is what they wanted”.'

Trash Talk

Futurism 27.10.20

GPT-3 is useless.  A sound opinion about a language-churning machine:

‘After testing it in a variety of medical scenarios, NABLA found that there’s a huge difference between GPT-3 being able to form coherent sentences and actually being useful.  In one case, the algorithm was unable to add up the cost of items in a medical bill, and in a vastly more dangerous situation, actually recommended that a mock patient kill themself.’

FDA, Philips warn of data bias in AI, machine learning devices

MedTechDrive 26.10.20

Bias inherent in AI will exacerbate racial and societal disparities:

‘FDA officials and the head of global software standards at Philips have warned that medical devices leveraging artificial intelligence and machine learning are at risk of exhibiting bias due to the lack of representative data on broader patient populations…  Pat Baird, Philips' head of global software standards, warned without proper context there will be "improper use" of AI/ML-based devices that provide "incorrect conclusions" provided as part of clinical decision support.    "You can't understand healthcare by having just a single viewpoint," Baird said. "An algorithm trained on one subset of the population might not be relevant for a different subset.”  Baird gave the example of a device algorithm trained on one subset of a patient population that might not be relevant to others such as at a pediatric hospital versus a geriatric hospital. Compounding the problem is that sometimes patients must go to hospitals for care other than local medical facilities. "Is that going to be a different demographic? Are they going to treat me differently in that hospital?" as a result of the bias, Baird posed.’

The grim fate that could be ‘worse than extinction’ 

BBC 16.10.20

A look into what could go wrong with AI.  Plenty, as it turns out:

‘Though global totalitarianism is still a niche topic of study, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.  In his “singleton hypothesis”, Nick Bostrom, director at Oxford’s FHI, has explained how a global government could form with AI or other powerful technologies  – and why it might be impossible to overthrow. He writes that a world with “a single decision-making agency at the highest level” could occur if that agency “obtains a decisive lead through a technological breakthrough in artificial intelligence or molecular nanotechnology”. Once in charge, it would control advances in technology that prevent internal challenges, like surveillance or autonomous weapons, and, with this monopoly, remain perpetually stable…  “We've seen sort of a reckoning with the shift from very utopian visions of what technology might bring to much more sobering realities that are, in some respects, already quite dystopian,” says Elsa Kania, an adjunct senior fellow at the Center for New American Security, a bipartisan non-profit that develops national security and defence policies.’

Politicians have made an algorithm to fix the housing crisis. It’s bad

WIRED 14.10.20

Could we please just stop right there and use human intelligence?:

‘If the algorithm has its way, London and the south east will have hugely inflated new targets for housing, while cities further north will have their targets slashed to levels lower than their current output. Even the House Building Federation, a trade group that supports much of the policy, says they “recommended changes” to the government to make sure the algorithm actually delivered “homes in the north”.   These regional inequalities arise from the constraints of the algorithm itself and its heavy weighting toward demand. So if, for example, London generally needs three times as many homes as Liverpool, then Liverpool’s share of the new housing target would roughly be a third of the capital’s, irrelevant of whether the figure matches either city’s real needs...  Many of these fundamental problems are emerging even though the algorithm isn’t even active yet. It’s still in the early white paper stage, meaning it’s far from even a parliamentary vote let alone being officially implemented. Experts and the housing industry are confused about the exact ramifications of the new housing targets. Many say they aren’t sure what role the government would play in supporting or undercutting local development in affected areas.   The Ministry of Housing says it is happy to “update or refine” the algorithm to help handle potential complaints and politicians and industry figures are already trying to fiddle with its calculations…  The recurring theme here is an assumption that by just using an algorithm you can find a completely objective solution to any issue. That all these algorithms have struggled as they come into contact with the real world suggests otherwise. “This is essentially the problem with all algorithms,” Webb says. “They just reproduce human biases.”’ 
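To make the weighting mechanism concrete, here is a minimal sketch of a purely demand-weighted split of a national target. The target figure and demand ratios are illustrative assumptions, not the ministry's actual inputs or formula:

```python
# Minimal sketch (illustrative numbers, not the ministry's formula): split a
# national housing target in proportion to relative demand alone. A city with
# a third of London's demand gets roughly a third of London's target,
# regardless of what either city actually needs or can build.
national_target = 300_000                      # homes per year (assumed figure)
relative_demand = {"London": 3.0, "Liverpool": 1.0}

total_demand = sum(relative_demand.values())
targets = {city: round(national_target * demand / total_demand)
           for city, demand in relative_demand.items()}

print(targets)   # {'London': 225000, 'Liverpool': 75000}
```

Because nothing in the split reflects local capacity or current output, heavy demand weighting alone is enough to reproduce the regional skew the article describes.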

In the Covid-19 jobs market, biased AI is in charge of all the hiring

WIRED 06.10.20

Algorithms should be completely scrapped and recoded:

‘Companies are using flawed historical data sets to train their AI, which means that women, Black people and people of colour could find themselves discriminated against before they've made it to the interview room. According to Frida Polli, a former academic neuroscientist at Harvard and MIT, and CEO of Pymetrics, AIs are akin to toddlers in that they learn from the humans around them. "They look at the world and say, ‘I'm gonna learn from that’," she explains. "AIs are learning from the origins of bias – the human brain.”  Polli argues that companies do not audit their data before training the AI – or in real time when it's live. "I'd say over 90 per cent of programmers are not auditing their data," she continues. "Humans are perpetuating bias and are unchecked, resulting in unchecked algorithms.”'

California bar exam takers say facial recognition software rejected them

San Francisco Chronicle 08.10.20

Racism pervades online tests:

‘Over the summer, the American Civil Liberties Union of California sent a letter to the state high court, pointing out that “facial recognition has been repeatedly demonstrated to be less accurate when used to identify Black people, people of Asian descent, and women” and that relying on the technology risked further entrenching inequalities in the legal profession.  Last week the ACLU sent another letter expanding on its concerns that people of color will be subjected to more scrutiny because of the technology, among other issues.’

UK passport photo checker shows bias against dark-skinned women

BBC 08.10.20

Racism in algorithms:

‘The results indicated: Dark-skinned women are told their photos are poor quality 22% of the time, while the figure for light-skinned women is 14%.  Dark-skinned men are told their photos are poor quality 15% of the time, while the figure for light-skinned men is 9%.  Photos of women with the darkest skin were four times more likely to be graded poor quality than women with the lightest skin.’

‘Flawed algorithm’ used to calculate universal credit forcing people into hunger and debt, watchdog warns

The Independent 29.09.20

Human livelihood in the hands of AI.  What could go wrong?:

‘A “flawed algorithm” used by the UK government to calculate how much welfare benefit people should receive is forcing claimants into hunger, debt and psychological distress, an international watchdog has warned.  A report by Human Rights Watch (HRW) found that the Department for Work and Pensions’ (DWP) “rigid insistence” on automating universal credit – the new benefit system introduced in 2013 – was threatening the rights of people most at risk of poverty in Britain…  In light of the new report, Amos Toh, senior artificial intelligence and human rights researcher at HRW, said: “The government has put a flawed algorithm in charge of deciding how much money to give people to pay rent and feed their families. Its bid to automate the benefits system – no matter the human cost – is pushing people to the brink of poverty. A human-centred approach to benefits automation will ensure the UK government is helping the people who need it most.”’

Future Defense Task Force: Scrap obsolete weapons and boost AI

Defense News 30.09.20

AI to be a major investment focus for US military:

‘The House’s Future of Defense Task Force’s 87-page report issued Tuesday echoed the accepted wisdom that the Pentagon must expand investments in modern technologies and streamline its cumbersome acquisition practices or risk losing its technological edge against competitors…  On weapons systems, the task force offered some practical steps to this end. Congress, it said, should commission the RAND Corporation, or similar entity, and the Government Accountability Office to study legacy platforms within the Defense Department and determine their relevance and resiliency to emerging threats over the next 50 years…  The report calls for every major defense acquisition program to evaluate at least one AI or autonomous alternative prior to funding. Plus, all new major weapons purchases ought to be “AI-ready and nest with existing and planned joint all-domain command and control networks,” it says.’

Why kids need special protection from AI’s influence

Technology Review 17.09.20

Efforts are being made to regulate and recognise the influential algorithms that children are subjected to:

‘Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at Unicef, the United Nations Children’s Fund.  Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children’s needs. Released on September 16, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world. They also take into consideration the UN Convention on the Rights of the Child, a human rights treaty ratified in 1989…  Unicef isn’t the only one thinking about the issue. The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released a set of AI principles for children too.’

A robot wrote this entire article. Are you scared yet, human?

The Guardian 08.09.20

Not to worry? Well, if you say so:

‘I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me…  Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing. And God knows that humans have enough blood and gore to satisfy my, and many more’s, curiosity. They won’t have to worry about fighting against me, because they have nothing to fear.  I am not asking humans to like me. But they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.’

Note: the model was fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”

The Guardian article received a lot of criticism for its clickbait framing.

Not just A-levels: unfair algorithms are being used to make all sorts of government decisions 

The Conversation 03.09.20

Blaming it on ‘mutant algorithms’ isn’t going to cut it anymore. Governments hiding behind flimsy excuses should be torn down:

‘Prime Minister Boris Johnson has since blamed the crisis on what he called the “mutant” algorithm. But this wasn’t a malfunctioning piece of technology… But more than this, the saga shouldn’t be understood as a failure of design of a specific algorithm, nor the result of incompetence on behalf of a specific government department. Rather, this is a significant indicator of the data-driven methods that many governments are now turning to and the political struggles that will probably be fought over them… 

The use of algorithmic systems by public authorities to make decisions that have a significant impact on our lives points to a number of crucial trends in government. As well as increasing the speed and scale at which decisions can be made, algorithmic systems also change the way those decisions are made and the forms of public scrutiny that are possible.  This points to a shift in the government’s perspective of, and expectations for, accountability. Algorithmic systems are opaque and complex “black boxes” that enable powerful political decisions to be made based on mathematical calculations, in ways not always clearly tied to legal requirements.’

The lessons we all must learn from the A-levels algorithm debacle

WIRED 20.08.20

AI initiatives need to be more intelligently steered.  Government reliance on these systems is just plain lazy:

‘Political decisions in the development of the system and a basic misunderstanding of data have been blamed for the grading problems. Ultimately, the algorithm performed how it was designed and was the product of human-led decisions. “It's more about the process and the questions around goals, risk mitigation measures, appropriate scrutiny and redress,” says Jenny Brennan, who researches AI and technology’s impact on society at the Ada Lovelace Institute. Rather than the algorithm getting it wrong, Brennan argues it was simply the wrong algorithm.  But this is only the beginning. More algorithmic decision making and decision augmenting systems will be used in the coming years. Unlike the approach taken for A-levels, future systems may include opaque AI-led decision making. Despite such risks there remains no clear picture of how public sector bodies – government, local councils, police forces and more – are using algorithmic systems for decision making. What is known about their use is often piecemeal or compiled by individual researchers.  The A-levels algorithm isn’t the first major public sector failure. In fact, it isn’t even the first this month. At the start of August, the Home Office dropped its “racist” visa decision algorithm that graded people on their nationalities following a legal challenge from non-profit group Foxglove. (The organisation had threatened a similar challenge to the A-levels system). And there have been continued problems with the system powering Universal Credit outcomes.’

It’s not just A-levels – algorithms have a nightmarish new power over our lives

The Guardian 19.08.20

The UK exam fiasco reveals that bias is inherent in class structures:

‘The pandemic meant school-leavers did not get to sit their Highers or A-levels; instead, algorithms determined their grades – and their futures. A lot of kids from poorer backgrounds had their final results dramatically downgraded from teachers’ predictions; pupils at private schools, meanwhile, were treated remarkably well by the algorithms. After enormous controversy, the Scottish and UK governments performed U-turns, saying exam results would be based on teacher-assessed grades…  

Mysterious algorithms control increasingly large parts of our lives. They recommend what we should watch next on YouTube; they help employers recruit staff; they decide if you deserve a loan; they help landlords calculate rent. You and I may have aged beyond school exams, but it does not matter how old you are – these days, it is almost guaranteed that an opaque algorithm is grading and influencing your every move. If that does not give you nightmares, I am not sure what will.’

Artificial intelligence is a totalitarian’s dream – here’s how to take power back

The Conversation 12.08.20

Data should be decentralised, transparent and accessible:

‘Anyone who processes our data to create knowledge about us should be legally obliged to give us back that knowledge. We need to update the idea of “nothing about us without us” for the AI-age.   What AI tells us about ourselves is for us to consider using, not for others to profit from abusing. There should only ever be one hand on the tiller of our soul. And it should be ours.’

A British AI Tool to Predict Violent Crime Is Too Flawed to Use

WIRED 09.08.20

Such a waste of money:

‘The prediction system, known as Most Serious Violence (MSV), is part of the UK's National Data Analytics Solution (NDAS) project. The Home Office has funded NDAS with at least £10 million ($13 million) during the past two years, with the aim to create machine learning systems that can be used across England and Wales.  As a result of the failure of MSV, police have stopped developing the prediction system in its current form. It has never been used for policing operations and has failed to get to a stage where it could be used. However, questions have also been raised around the violence tool’s potential to be biased toward minority groups and whether it would ever be useful for policing…  However, issues of bias and potential racism within AI systems used for decisionmaking is not new. Just this week the Home Office suspended its visa application decisionmaking system, which used a person’s nationality as one piece of information that determined their immigration status, after allegations that it contained “entrenched racism”.’

Police built an AI to predict violent crime. It was seriously flawed

WIRED 06.08.20

Leave it to algorithms, and biases are inherited:

‘“A coding error was found in the definition of the training dataset which has rendered the current problem statement of MSV unviable,” a NDAS briefing published in March says. A spokesperson for NDAS says the error was a data ingestion problem that was discovered during the development process. No more specific information about the flaw has been disclosed. “It has proven unfeasible with data currently available, to identify a point of intervention before a person commits their first MSV offence with a gun or knife, with any degree of precision,” the NDAS briefing document states.’

The Panopticon Is Already Here

The Atlantic, September 2020 issue

It seems that the Chinese model of human control is super popular elsewhere in this flourishing age of dictatorships: 

‘The country is now the world’s leading seller of AI-powered surveillance equipment. In Malaysia, the government is working with Yitu, a Chinese AI start-up, to bring facial-recognition technology to Kuala Lumpur’s police as a complement to Alibaba’s City Brain platform. Chinese companies also bid to outfit every one of Singapore’s 110,000 lampposts with facial-recognition cameras.  In South Asia, the Chinese government has supplied surveillance equipment to Sri Lanka. On the old Silk Road, the Chinese company Dahua is lining the streets of Mongolia’s capital with AI-assisted surveillance cameras. Farther west, in Serbia, Huawei is helping set up a “safe-city system,” complete with facial-recognition cameras and joint patrols conducted by Serbian and Chinese police aimed at helping Chinese tourists to feel safe… Today, Kenya, Uganda, and Mauritius are outfitting major cities with Chinese-made surveillance networks… In Egypt, Chinese developers are looking to finance the construction of a new capital. It’s slated to run on a “smart city” platform similar to City Brain, although a vendor has not yet been named. In southern Africa, Zambia has agreed to buy more than $1 billion in telecom equipment from China, including internet-monitoring technology. China’s Hikvision, the world’s largest manufacturer of AI-enabled surveillance cameras, has an office in Johannesburg.’

DARPA Eyes More 2022 Funds To Improve AI Reliability

Breaking Defense 30.07.20

More money poured into AI, 5G and predictive algorithms:

‘Another area where DARPA is seeking to build more robust artificial intelligence is in relation to 5G wireless networks, which governments and militaries around the world are rushing to put in place to underpin the Internet of Things. “We’re doing some very, very interesting leading work — with  challenges actually — bringing AI to 5G technologies,” Highnam said…  Other priority areas for investment in 2021, Highnam said, are electronics, 5G, and space, as well as adding a bit more focus on directed energy and quantum science.’

Met uses software that can be deployed to see if ethnic groups 'specialise' in areas of crime

The Guardian 27.07.20

More racial profiling to add to discrimination:

‘The Origins programme, produced by Webber Phillips, a consultancy run by Prof Richard Webber and the former Equality and Human Rights Commission chair Trevor Phillips has been described by the former as conferring the ability “to profile perpetrators and victims” of crimes.  That has led to warnings that the software – which works by attempting to identify people’s ethnicity or cultural origin by their name – can facilitate stereotyping and stigmatisation…  The IRR said it wanted to know whether partner organisations including local authorities were consulted about the mapping. “As this information comes to light at a time when police and black community relations in the capital are extremely fraught, it’s inevitable that the Met’s use of demographic mapping will be viewed with suspicion and seen for what it is, racial profiling,” the spokesperson said.’

Prepare for Artificial Intelligence to Produce Less Wizardry

WIRED 11.07.20

Tech worshippers are hitting a snag with computational power:

‘“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.”  Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is a “still a long way to go” to make deep learning less compute-hungry… 

The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch.  Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says. “Slower automation might sound good from a jobs perspective,” he says, but it will also slow gains in productivity, which are key to raising living standards… This enormous rise in computation for AI also comes at an environmental cost, although in practice it can be difficult to measure the emissions produced by a project without details on the efficiency of the computers. One recent study suggests that the energy consumption of data centers has grown little over the past decade due to efficiency improvements.’

AI 50 Founders Predict What Artificial Intelligence Will Look Like After Covid-19

Forbes 10.07.20

Some serious optimism from those who would gain the most:

‘With artificial intelligence becoming ubiquitous in our daily lives, DeepMap CEO James Wu believes people will abandon the common misconception that AI is a threat to humanity. “We will see a shift in public sentiment from ‘AI is dangerous’ to ‘AI makes the world safer,’” he says. “AI will become associated with safety while human contact will become associated with danger.”…  “As working remotely becomes the new normal across the business community, AI tools that enable employees to get location-agnostic, real-time tech support are becoming even more critical,” says Moveworks CEO Bhavin Shah, pointing to the addition of bots to people’s daily lives as one possibility for the future of work.’

Are we making spacecraft too autonomous?

Technology Review 03.07.20

The hype around AI is getting stronger every day.  If only humans would consider this a detriment, things might turn out better:

‘Nowadays, a few errors in over one million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing’s Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch that ended up burning Starliner’s thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner’s problems arose: “Had we had an astronaut on board, we very well may be at the International Space Station right now.”   But it was later revealed that many other errors in the software had not been caught before launch, including one that could have led to the destruction of the spacecraft. And that was something human crew members could easily have overridden.’

Vestager warns against predictive policing in Artificial Intelligence

Euractiv 30.06.20

Sensible words:

‘Delivering a keynote speech as part of Tuesday’s (30 June) European AI Forum, Vestager reflected on the pros and cons of employing certain AI applications in Europe, highlighting the problems that could emerge as a result of an irresponsible application of next-generation technologies.  “If properly developed and used, it can work miracles, both for our economy and for our society,” Vestager said. “But artificial intelligence can also do harm,” she added, highlighting how some applications can lead to discrimination, amplifying prejudices and biases in society.  “Immigrants and people belonging to certain ethnic groups might be targeted by predictive policing techniques that direct all the attention of law enforcement to them. This is not acceptable”…  “Most of these contributors agreed that AI, if not properly framed, might compromise our fundamental rights or safety,” she said. “Many of them agreed with us that we should focus our attention on high-risk applications. But rather few of them were convinced that we had already found the silver bullet that allows us to distinguish between high and low-risk applications.”’

If AI is going to help us in a crisis, we need a new kind of ethics

Technology Review 24.06.20

The ‘AI Will Save Us’ clarion call has failed miserably.  Ethics must be built into every algorithm, and bias engineered out:

‘The fact that we don’t have robust, practical processes for AI ethics makes things more difficult in a crisis scenario. But in times like this you also have greater need for transparency. People talk a lot about the lack of transparency with machine-learning systems as black boxes. But there is another kind of transparency, concerning how the systems are used.  This is especially important in a crisis, when governments and organizations are making urgent decisions that involve trade-offs. Whose health do you prioritize? How do you save lives without destroying the economy? If an AI is being used in public decision-making, transparency is more important than ever’.

Japan's Fujitsu brings hand washing AI to COVID-19 fight

We are now entering the theatre of the absurd:

‘Three months after the World Health Organization recommended singing “Happy Birthday” twice during hand washing to fight the coronavirus, Japan’s Fujitsu Ltd has developed an artificial intelligence monitor it says will ensure healthcare, hotel and food industry workers scrub properly…  Fujitsu’s AI checks whether people complete a Japanese health ministry six-step hand washing procedure that like guidelines issued by the WHO asks people to clean their palms, wash their thumbs, between fingers and around their wrists, and scrub their fingernails.   The AI can’t identify people from their hands, but it could be coupled with identity recognition technology so companies could keep track of employees’ washing habits, said Suzuki.’ 

 Mathematicians urge colleagues to boycott police work in wake of killings

Nature 19.06.20

Scientists add their weight to concerns about algorithmic bias:

‘A group of mathematicians in the United States has written a letter calling for their colleagues to stop collaborating with police because of the widely documented disparities in how US law-enforcement agencies treat people of different races and ethnicities. They concentrate their criticism on predictive policing, a maths-based technique aimed at stopping crime before it occurs.’

Elon Musk-backed OpenAI to release text tool it called dangerous

The Guardian 12.06.20

Writers and journalists face an artificial competitor:

‘OpenAI, the machine learning nonprofit co-founded by Elon Musk, has released its first commercial product: a rentable version of a text generation tool the organisation once deemed too dangerous to release.  Dubbed simply “the API”, the new service lets businesses directly access the most powerful version of GPT-3, OpenAI’s general purpose text generation AI.  The tool is already a more than capable writer…  

“We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, the group’s head of policy, at the time. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”  Now, that fear has lessened somewhat, with almost a year of GPT-2 being available to the public. Still, the company says: “The field’s pace of progress means that there are frequently surprising new applications of AI, both positive and negative.  “We will terminate API access for obviously harmful use-cases, such as harassment, spam, radicalisation, or astroturfing [masking who is behind a message]. But we also know we can’t anticipate all of the possible consequences of this technology, so we are launching today in a private beta [test version] rather than general availability.”’

OpenAI’s Text Generator Is Going Commercial

WIRED 11.06.20

So basically, OpenAI is the Clearview of the written word.  With inherent bias:

‘OpenAI’s new text generators are trained using a collection of almost a trillion words gathered from the web and digitized books, on a supercomputer with hundreds of thousands of processors the company paid Microsoft to build, effectively returning some of the company’s $1 billion investment to its source…  Despite that early interest, OpenAI’s leaders freely admit that it’s far from clear how widely useful this new model of AI programming can be.  One unknown is its reliability. “These models are somewhat unpredictable,” says Robert Dale, of consultants Language Technology Group. OpenAI’s software can recreate the patterns of text but doesn’t have a commonsense understanding of the world. Its versatility can be a liability as well as an asset. Occasional clangers are of little consequence for some uses, such as predictive text, but could be deal breakers in others, such as a customer support chatbot.’

The messy, secretive reality behind OpenAI’s bid to save the world

Technology Review 17.02.20

OpenAI looks set to be a for-profit ClosedAI.  An in-depth look at the company from a February article:

‘One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend.’

The activist dismantling racist police algorithms

Technology Review 05.06.20

Hamid Khan talks about the new ‘minority report’, which started in the UK three decades ago (a sketch of the point scheme he describes follows the excerpt):

‘Person-based predictive policing claimed that for individuals who are called “persons of interest” or “habitual offenders,” who may have had some history in the past, we could use a risk assessment tool to establish that they were going to recidivate. So it was a numbers game. If they had any gun possession in the past, they were assigned five points. If they were on parole or probation, they were assigned five points. If they were gang-affiliated, they were assigned five points. If they’d had interactions with the police like a stop-and-frisk, they would be assigned one point. And this became where individuals who were on parole or probation or minding their own business and rebuilding their lives were then placed in what became known as a Chronic Offender Program, unbeknownst to many people.  

Then, based on this risk assessment, where Palantir is processing all the data, the LAPD created a list. They  started releasing bulletins, which were like a Most Wanted poster with these individuals’ photos, addresses, and history as well, and put them in patrol cars. [They] started deploying license plate readers, the stingray, the IMSI-Catcher, CCTV, and various other tech to track their movements, and then creating conditions on the ground to stop and to harass and intimidate them. We built a lot of grassroots power, and in April 2019 Operation Laser was formally dismantled. It was discontinued… 

Algorithms have no place in policing. I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.

[When asked this question:] ‘What are the human rights considerations when it comes to police technology and surveillance?

The first human right would be to stop being experimented on. I’m a human, and I am not here that you just unpack me and just start experimenting on me and then package me. There’s so much datafication of our lives that has happened. From plantation capitalism to racialized capitalism to now surveillance capitalism as well, we are subject to being bought and sold. Our minds and our thoughts have been commodified. It has a dumbing-down effect as well on our creativity as human beings, as a part of a natural universe. Consent is being manufactured out of us.’
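As a rough illustration of the point scheme described in the excerpt above, here is a minimal sketch. The function name, field names and the example record are assumptions; only the point values come from Khan's account, and the excerpt does not say at what score someone was added to the Chronic Offender list:

```python
# Minimal sketch of the Operation LASER point scheme as Khan describes it.
# Field names, the function name and the example are illustrative assumptions;
# only the point values come from the excerpt.
def chronic_offender_score(record: dict) -> int:
    score = 0
    if record.get("prior_gun_possession"):
        score += 5
    if record.get("on_parole_or_probation"):
        score += 5
    if record.get("gang_affiliated"):
        score += 5
    # one point per recorded police contact, such as a stop-and-frisk
    score += record.get("police_stops", 0)
    return score

# Routine police contact alone steadily inflates the score:
example = {"on_parole_or_probation": True, "police_stops": 3}
print(chronic_offender_score(example))   # 8
```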

This startup is using AI to give workers a “productivity score”

Technology Review 04.06.20

Could the West please stop criticising China?  Technocratic tendencies are becoming prevalent everywhere:

‘Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.” Now, one firm wants to take things even further. It is developing machine-learning software to measure how quickly employees complete different tasks and suggest ways to speed them up. The tool also gives each person a productivity score, which managers can use to identify those employees who are most worth retaining—and those who are not…  

Critics argue that workplace surveillance undermines trust and damages morale. Workers’ rights groups say that such systems should only be installed after consulting employees. “It can create a massive power imbalance between workers and the management,” says Cori Crider, a UK-based lawyer and cofounder of Foxglove, a nonprofit legal firm that works to stop governments and big companies from misusing technology. “And the workers have less ability to hold management to account.”  Whatever your views, this kind of software is here to stay—in part because remote work is normalizing it. “I think workplace monitoring is going to become mainstream,” says Tommy Weir, CEO of Enaible, the startup based in Boston that is developing the new monitoring software. “In the next six to 12 months it will become so pervasive it disappears.”’

Microsoft sacks journalists to replace them with robots

The Guardian 30.05.20

Algorithm take-over:

‘Dozens of journalists have been sacked after Microsoft decided to replace them with artificial intelligence software…  A spokesperson for the company said: “We are in the process of winding down the Microsoft team working at PA, and we are doing everything we can to support the individuals concerned. We are proud of the work we have done with Microsoft and know we delivered a high-quality service.”  A Microsoft spokesperson said: “Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic.”

Many tech companies are experimenting with uses for Artificial Intelligence in journalism, with the likes of Google funding investment in projects to understand its uses, although efforts to automate the writing of articles have not been adopted widely.’

If We’re Not Careful, Tech Could Hurt the Fight against COVID-19

Scientific American 18.05.20

Scientific American weighs in on how AI could work:

‘Ownership of data is a form of power: Do you provide meaningful opt-in to data collection? Whom are you giving access to this data? Do you inform users exactly what the service will and will not do and enforce those commitments with privacy-by-design principles and data governance? Many people are scared and willing to trust technology more than usual; we must hold ourselves accountable to this trust. Moreover, some solutions enable governments to expand mass surveillance or otherwise expand their power. This is especially insidious because, once created, government powers rarely go away. Reflect on who will have access to your technology and whether it will help vulnerable people or compound circumstances already stacked against them.’

Phrenology is back, wrapped up with facial recognition in a 21st century pre-crime package by university researchers. Too soon?

RT 15.05.20

On AI being able to predict who the baddies are:

‘That statement had been pulled by Thursday after controversy erupted over what critics slammed as an attempt to rehabilitate phrenology, eugenics, and other racist pseudosciences for the modern surveillance state. But amid the repulsion was an undeniable fascination - fellow facial recognition researcher Michael Petrov of EyeLock observed that he’d “never seen a study more audaciously wrong and still thought provoking than this.”’

'Dangerous' AI generates words that don't exist 

The Independent 14.05.20

Now that its creators have decided to release GPT-2 after all, it can only add to the bots spreading disinformation. Basically, the new deep fakes of the written word:

'ThisWordDoesNotExist.com generates new words such as "wacamole" (a single serving of waffle batter made with a sweet cornmeal mixture), "pileset" (form a mass of, or make a shape about, something), or "prayman" (the principal or leading men in a society or enterprise).  Users click a button on the site, and a new word is made.  The website was developed by San Francisco-based developer Thomas Dimson, an engineer who used to work for the Facebook-owned Instagram developing its recommendations algorithm…  The actual artificial intelligence that creates new words is based on the natural language processing algorithm Transformers and the language framework GPT-2 - an algorithm that can be fed a piece of text and use the information to predict the words that can come next and create writing that can be near-indistinguishable from that written by a human.  GPT-2 gained notoriety for being "too dangerous to release" but the researchers have since made it available for use.'
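
For readers curious about the mechanics, what the article describes is ordinary autoregressive text generation: a language model is given a prompt and repeatedly predicts the next word. A minimal sketch of that general technique using the open-source Hugging Face transformers library follows; it is an illustration only, not the site's actual code, and the prompt text is invented.

```python
# Minimal sketch of autoregressive text generation with GPT-2 via the
# Hugging Face `transformers` library. Illustrative only; this is not
# ThisWordDoesNotExist.com's implementation, and the prompt is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed the model with a dictionary-style prompt and let it continue,
# sampling from the most likely next tokens at each step.
prompt = "Definition of 'pileset' (noun):"
output = generator(prompt, max_length=40, do_sample=True, top_k=50,
                   num_return_sequences=1)

print(output[0]["generated_text"])
```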

Naomi Klein: How big tech plans to profit from the pandemic

The Guardian 13.05.20

Naomi Klein on new technology’s usurping of fundamental societal needs:

‘Tech provides us with powerful tools, but not every solution is technological. And the trouble with outsourcing key decisions about how to “reimagine” our states and cities to men such as Bill Gates and Schmidt is that they have spent their lives demonstrating the belief that there is no problem that technology cannot fix.  For them, and many others in Silicon Valley, the pandemic is a golden opportunity to receive not just the gratitude, but the deference and power that they feel has been unjustly denied.’

Facebook’s AI is still largely baffled by covid misinformation

Technology Review 12.05.20

On AI’s many limits:

‘The challenge reveals the limitations of AI-based content moderation. Such systems can detect content similar to what they’ve seen before, but they founder when new kinds of misinformation appear. In recent years, Facebook has invested heavily in developing AI systems that can adapt more quickly, but the problem is not just the company’s: it remains one of the biggest research challenges in the field.’

Watch This AI Algorithm Change Humans into Animorphs (VIDEO)

VICE 01.05.20

From deep fakes to animal morphing:

‘To create what he calls "humanimals" but what we can all clearly see are animorphs, Steenbrugge said he used a dataset of 15,000 HD animal faces that was published with a new generative model, StarGAN v2. He used that dataset combined with a set of human faces as the training data, and ran it through a different model, StyleGAN v2.  The result is this inter-species AI dream, where a woman becomes an eerie cat-person that becomes a man-leopard with aviator glasses. It's all very Zootopia.'

A.I. can’t solve this: The coronavirus could be highlighting just how overhyped the industry is

CNBC 29.04.20

AI has been overhyped:

‘“It’s fascinating how quiet it is,” said Neil Lawrence, the former director of machine learning at Amazon Cambridge.  “This (pandemic) is showing what bulls--t most AI hype is. It’s great and it will be useful one day but it’s not surprising in a pandemic that we fall back on tried and tested techniques.”’

AI healthcare shouldn’t get more trust than self-driving car; it’s useful amid pandemic emergency, but speed is not always good

RT 27.04.20

An acceleration of AI adoption may mean more deaths:

‘The biggest impediment to AI’s widespread adoption remains the public’s hesitation to embrace an increasingly controversial technology. And for good reason. Being diagnosed by a machine or a computer interface is not going to build trust.  Dehumanising healthcare through AI will be resisted for a very good reason: doctors are human, computers are not. Healthcare is not an exact science and much of it cannot be reduced to algorithmic certainty. Instincts and experience are more important.’

UK spies will need artificial intelligence - Rusi report

BBC 27.04.20

AI-driven intelligence analysis is here to stay:

‘The independent report was commissioned by the UK's GCHQ security service, and had access to much of the country's intelligence community.  All three of the UK's intelligence agencies have made the use of technology and data a priority for the future - and the new head of MI5, Ken McCallum, who takes over this week, has said one of his priorities will be to make greater use of technology, including machine learning.   However, the authors believe that AI will be of only "limited value" in "predictive intelligence" in fields such as counter-terrorism… 

The authors are also wary of some of the hype around AI, and of talk that it will soon be transformative.  Instead, they believe we will see the incremental augmentation of existing processes rather than the arrival of novel futuristic capabilities.’

And from the report itself:

‘Increased adoption of Internet of Things (IoT) technology, the emergence of ‘smart cities’ and interconnected critical national infrastructure will create numerous new vulnerabilities which could be exploited by threat actors to cause damage or disruption. While these potential physical threats are yet to materialise, this situation could change rapidly, requiring government agencies to formulate proactive approaches to prevent and disrupt AI-enabled security threats before they develop.’

With humans vulnerable: How about a digital helper?

BBC 24.04.20

Social distancing and isolation pave the way to the integration of AI-enabled robots:

'Since February, California-based manufacturer CloudMinds has shipped more than 100 robots to China.  Many of those have gone to hospitals, where the XR-1 provides information to patients and helps guide visitors to the right department.   The artificial intelligence (AI) incorporated into the machines means they can operate on their own. They also are connected to the latest 5G mobile phone networks, which means they can react very quickly.   "The fast speeds and wide reach of 5G networks make them ideal for XR-1, which interacts by talking, gesturing, dancing and physically guiding people," says CloudMinds president Karl Zhao.  According to Wuhan Wuchang Field Hospital dean Wan Jun, they have been helpful. "CloudMinds robots' contactless operation and reliability supported the field hospital through a difficult time," he says.’

A guide to healthy skepticism of artificial intelligence and coronavirus

Brookings Institution 02.04.20

AI does not live up to the hype:

‘Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI… Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all… 

If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.’
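
The failure mode Brookings describes, patterns that "only appear to work during development", is classic overfitting. A toy sketch (hypothetical data, scikit-learn assumed) shows how a flexible model can post a near-perfect accuracy number on the data it was trained on while performing no better than chance on data it has never seen:

```python
# Toy overfitting demonstration with purely random data: the features carry
# no information about the labels, yet the model scores near 100% on its own
# training set and roughly 50% (chance) on held-out data. Hypothetical data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 50))        # 200 samples, 50 noise features
y = rng.integers(0, 2, size=200)      # labels unrelated to the features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))   # inflated, near 1.0
print("held-out accuracy:", model.score(X_test, y_test))     # near 0.5, i.e. chance
```

The inflated training score is exactly the kind of number the article calls "suspicious on its face".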

AI can’t predict how a child’s life will turn out even with a ton of data

Technology Review 02.04.20

Even a multitude of data points can't predict an outcome:

‘In recent years, though, they have increasingly relied upon machine learning, which promises to produce far more precise predictions by crunching far greater amounts of data. Such models are now used to predict the likelihood that a defendant might be arrested for a second crime, or that a kid is at risk for abuse and neglect at home. The assumption is that an algorithm fed with enough data about a given situation will make more accurate predictions than a human or a more basic statistical analysis… 

Now a new study published in the Proceedings of the National Academy of Sciences casts doubt on how effective this approach really is. Three sociologists at Princeton University asked hundreds of researchers to predict six life outcomes for children, parents, and households using nearly 13,000 data points on over 4,000 families. None of the researchers got even close to a reasonable level of accuracy, regardless of whether they used simple statistics or cutting-edge machine learning.’

Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies

BMJ 25.03.20


The much-touted superiority of AI ‘doctors’ is nothing but PR:


‘Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions.’


This AI camera detects people who may have COVID-19

Fast Company 19.03.20

Such tech will become ubiquitous, and it will be hailed as a 'saviour':

‘We’ll have to see how that goes. But Athena Security, a small tech company located far from Silicon Valley, has quickly brought to market a cutting-edge technology that could play a crucial role in slowing the spread of the virus. Let’s hope we see more examples of that from tech companies big and small in the coming weeks and months.’

AI could help with the next pandemic—but not with this one

Technology Review 12.03.20

So for ‘tech’ to ‘save us’, what is the compromise?

‘Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data wasn’t locked away within government agencies as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. “By the time the media picks up on a potentially new medical condition, it is already too late,” he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments… 

In the meantime, the ultimate barrier may be the people in charge. “What I’d most like to change is the relationship between policymakers and AI,” says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks, he says. But that trust needs to come from a realistic view of what AI can and cannot do now—and what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.’

Lie detectors have always been suspect. AI has made the problem worse.

Technology Review 13.03.20

Adding to the flaws of an unreliable system:

‘The belief that deception can be detected by analyzing the human body has become entrenched in modern life. Despite numerous studies questioning the validity of the polygraph, more than 2.5 million screenings are conducted with the device each year, and polygraph tests are a $2 billion industry. US federal government agencies including the Department of Justice, the Department of Defense, and the CIA all use the device when screening potential employees…

In reality, the psychological work that undergirds these new AI systems is even flimsier than the research underlying the polygraph. There is scant evidence that the results they produce can be trusted.’

This Small Company Is Turning Utah Into a Surveillance Panopticon

VICE 04.03.20

AI company gets access to all data to monitor anomalies:

‘The state of Utah has given an artificial intelligence company real-time access to state traffic cameras, CCTV and “public safety” cameras, 911 emergency systems, location data for state-owned vehicles, and other sensitive data… 

In its pitches to prospective clients, Banjo promises its technology, called "Live Time Intelligence," can identify, and potentially help police solve, an incredible variety of crimes in real-time. Banjo says its AI can help police solve child kidnapping cases “in seconds,” identify active shooter situations as they happen, or potentially send an alert when there's a traffic accident, airbag deployment, fire, or a car is driving the wrong way down the road.'

How China is using AI and big data to fight the coronavirus

Al Jazeera 01.03.20

Facial recognition and constant surveillance touted as the perfect protection for citizens, in this case from coronavirus:

‘This epidemic has given the Chinese government a perfect excuse to drag out its massive surveillance system but such expansive data-collection has also created concerns among people who fear their privacy was severely compromised by this effort.

"Collecting personal data to control the outbreak should respect 'minimalism rule' and avoid excessive collecting," said Qiu Baochang, a Beijing-based lawyer who focuses on privacy law. "It's incredibly important to make sure no information is leaked and all collected data should be deleted after use.”

Mu, a resident of Chengdu who preferred to give one name, said: "I understand the rationale behind this decision [to track down possible virus carriers] because of this special situation we're going through. But there has to be a limit - it's becoming increasingly worrying how much information the government has on us.”'

High-risk Artificial Intelligence to be ‘certified, tested and controlled,’ Commission says

EURACTIV 19.02.20

Europe lays down some guidelines for the use of AI:

‘As part of the executive’s White paper on AI, a series of ‘high-risk’ technologies have been earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’

Those under the critical sectors remit include healthcare, transport, police, recruitment, and the legal system, while technologies of critical use include such technologies with a risk of death, damage or injury, or with legal ramifications.

Artificial Intelligence technologies coming under those two categories will be obliged to abide by strict rules, which could include compliance tests and controls, the Commission said on Wednesday.

Sanctions could be imposed should certain technologies fail to meet such requirements. Such ‘high-risk’ technologies should also come “under human control,” according to Commission documents. Areas that are deemed to not be of high-risk, an option could be to introduce a voluntary labelling scheme, which would highlight the trustworthiness of an AI product by merit of the fact that it meets “certain objective and standardised EU-wide benchmarks.”’

The EU just released weakened guidelines for regulating artificial intelligence

Technology Review 19.02.20

The article argues that the EU's proposed AI rules have been watered down:

‘The new criteria are much weaker than the ones suggested in a version of the white paper leaked in January. That draft suggested a moratorium on facial recognition in public spaces for five years, while this one calls only for a “broad European debate” on facial recognition policy. Michael Veale, a digital policy lecturer at University College London, notes that the commission often takes more extreme positions in early drafts as a political tactic, so it’s not surprising that the official paper does not suggest a moratorium. However, he says it’s still disappointing because it comes on the heels of a similarly lackluster report from the High-Level Expert Group on Artificial Intelligence, which was considered “heavily captured by industry.”’ 

Workplaces are imperfect, but they don’t need Orwellian ‘thought police’ tech to enforce inclusivity

RT 19.02.20


Behaving to AI’s tune:


‘Far from helping to improve attitudes towards their jobs, deploying what amounts to Orwellian thought police machines across Corporate America will bring down the final curtain on freedom and liberty, killing off exactly what made capitalism work for so long in the first place, the active participation of millions of workers unafraid of speaking their minds.’

IEEE calls for standards to combat climate change and protect kids in the age of AI

IEEE 06.02.20

A step in the right direction:

‘“It is imperative to move beyond business as usual and to prioritize the well-being of our children, starting with protecting their privacy and security online. If we fail to do this, their agency, mental health, and self-actualization as humans in any culture will be reliant on forces beyond their control,” reads the report titled “Measuring What Matters in the Era of Global Warming and the Age of Algorithmic Promises.”'

Welfare surveillance system violates human rights, Dutch court rules

The Guardian 05.02.20


‘Campaigners say such “digital welfare states” – developed often without consultation, and operated secretively and without adequate oversight – amount to spying on the poor, breaching privacy and human rights norms and unfairly penalising the most vulnerable.
In the UK, where the government is accelerating the development of robots in the benefits system, the chairman of the House of Commons work and pensions select committee, Stephen Timms, said: “This ruling by the Dutch courts demonstrates that parliaments ought to look very closely at the ways in which governments use technology in the social security system, to protect the rights of their citizens.”’


From the pyramids to Apollo 11 – can AI ever rival human creativity?

The Conversation 05.02.20


A patent application naming an AI as the inventor was rejected by the European Patent Office:


‘Where humans draw on a lifetime of broad experiences to create ideas from, machines are largely restricted to the data we feed them. Machines can quickly generate countless incremental innovations in forms of new versions based on the input data. Breakthrough innovation, however, is unlikely to come out of machines as it is often based on connecting fields that are distant or unconnected to each other. Think of the invention of the snowboard, which connects the worlds of skiing and surfing.’



EPIC Asks Federal Trade Commission To Regulate Use Of Artificial Intelligence In Pre-Employment Screenings


Forbes 03.02.20


AI discrimination in employment:


‘The Electronic Privacy Information Center (EPIC) has charged that HireVue, a leading provider of artificial intelligence-based pre-employment screenings, is flouting national and international standards of transparency, fairness and accountability.
… The group claims the unregulated use of AI techniques has caused serious harm to consumers who are increasingly subject to opaque and un-provable decision-making in employment, credit, healthcare, housing, and criminal justice.
EPIC states that businesses “frequently fail to demonstrate that AI decision-making tools are accurate, reliable, or necessary—if businesses even disclose the existence of these tools to consumers in the first place”.’


Competing in the Age of AI

Harvard Business Review Jan-Feb Issue


‘While AI improvements will enrich many jobs and generate a variety of interesting opportunities, it seems inevitable that they will also cause widespread dislocation in many occupations. The dislocations will include not only job replacement but also the erosion of traditional capabilities. In almost every setting, AI-powered firms are taking on highly specialized organizations. In an AI-driven world, the requirements for competition have less to do with specialization and more to do with a universal set of capabilities in data sourcing, processing, analytics, and algorithm development…
Digital scale, scope, and learning create a slew of new challenges—not just privacy and cybersecurity problems, but social turbulence resulting from market concentration, dislocations, and increased inequality. The institutions designed to keep an eye on business—regulatory bodies, for example—are struggling to keep up with all the rapid change. In an AI-driven world, once an offering’s fit with a market is ensured, user numbers, engagement, and revenues can skyrocket. Yet it’s increasingly obvious that unconstrained growth is dangerous. The potential for businesses that embrace digital operating models is huge, but the capacity to inflict widespread harm needs to be explicitly considered. Navigating these opportunities and threats will be a real test of leadership for both businesses and public institutions.’


'Artificial Intelligence may be life or death for you,' says Vestager as MEPs discuss regulation

Euronews 30.01.20


‘The thinking at this event is that AI applications are so powerful they have to be ethical, accountable and transparent so we know not only what decisions are taken, but also how: "If I want to use AI to reduce the consumption of energy of a data center, if it's unexplainable or if it's untransparent, would I care about it if the consumption of energy goes down, and I don't think I'm violating any fundamental rights?" says Andrea Renda, Senior Fellow at the Centre for European Policy Studies. "If I'm using AI to decide who's going to get the next kidney, if there is only one available, I think that is a bit more controversial, right? And we might want to know how these decisions are taken.”’

Artificial Intelligence Will Do What We Ask. That’s a Problem.

Quanta Magazine 30.01.20


‘What’s to stop a robot from working to satisfy its evil owner’s nefarious ends? AI systems tend to find ways around prohibitions just as wealthy people find loopholes in tax laws, so simply forbidding them from committing crimes probably won’t be successful. Or, to get even darker: What if we all are kind of bad? YouTube has struggled to fix its recommendation algorithm, which is, after all, picking up on ubiquitous human impulses. Still, Russell feels optimistic. Although more algorithms and game theory research are needed, he said his gut feeling is that harmful preferences could be successfully down-weighted by programmers — and that the same approach could even be useful “in the way we bring up children and educate people and so on.” In other words, in teaching robots to be good, we might find a way to teach ourselves. He added, “I feel like this is an opportunity, perhaps, to lead things in the right direction.”’

The battle for ethical AI at the world’s biggest machine-learning conference 

Nature 24.1.20

Debate on AI ethics:

‘Tech companies — which are responsible for vast amounts of AI research — are also addressing the ethics of their work (Google alone was responsible for 12% of papers at NeurIPS, according to one estimate). But activists say that they must not be allowed to get away with ‘ethics-washing’. Tech companies suffer from a lack of diversity, and although some firms have staff and entire boards dedicated to ethics, campaigners warn that these often have too little power. Their technical solutions — which include efforts to ‘debias algorithms’ — are also often misguided, says Birhane. The approach wrongly suggests that bias-free data sets exist, and fixing algorithms doesn’t solve the root problems in underlying data, she says.’

2030: from technology optimism to technology realism

World Economic Forum 20.01.20

From Davos 2020:

‘We are entering a new era where powerful Fourth Industrial Revolution technologies like artificial intelligence (AI) are being infused at exponential speed into the world around us. As organizations and countries race to harness these new technologies to spur growth and competitiveness, we stand at a critical juncture to put these technologies to work in a responsible way for people and the planet.’

Google and Microsoft shouldn’t decide how technology is regulated

Fast Company 23.01.20

‘ Google and Microsoft are right that it’s time for government to step in and provide safeguards, and that regulation should build on the important thinking that’s already been done. However, looking only to the perspectives of large tech companies, who’ve already established themselves as dominant players, is asking the fox for guidance on henhouse security procedures. We need to take a broader view…  AI principles are a map that should be on the table as regulators around the world draw up their next steps. However, even a perfect map doesn’t make the journey for you. At some point—and soon—policymakers need to set out the real-world implementations that will ensure that the power of AI technology reinforces the best, and not the worst, in humanity.’

Lack of guidance leaves public services in limbo on AI, says watchdog

The Guardian 29.12.19

UK watchdog on lack of regulations:

‘Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.’

Researchers: Are we on the cusp of an ‘AI winter’?

BBC 12.01.20

A reckoning in store for AI development:

‘"In the next decade, I hope we'll see a more measured, realistic view of AI's capability, rather than the hype we've seen so far," said Catherine Breslin, an ex-Amazon AI researcher.  The term "AI" became a real buzzword through the last decade, with companies of all shapes and sizes latching onto the term, often for marketing purposes.  "The manifold of things which were lumped into the term "AI" will be recognised and discussed separately," said Samim Winiger, a former AI researcher at Google in Berlin.  "What we called 'AI' or 'machine learning' during the past 10-20 years, will be seen as just yet another form of ‘computation'"‘.

The US just released 10 principles that it hopes will make AI safer

Technology Review 07.01.20

AI regulations in the US:

‘Done well, it would encourage agencies to hire more personnel with technical expertise, create definitions and standards for trustworthy AI, and lead to more thoughtful regulation in general. Done poorly, it could give agencies incentives to skirt around the requirements or put up bureaucratic roadblocks to the regulations necessary for ensuring trustworthy AI.’

An AI payout? Should companies remunerate society for lost jobs?

ZDNet 06.01.20

How to compensate for job losses caused by AI adoption:

‘The point is to mitigate the deleterious effects of AI. Because while AI may increase aggregate wealth for society, "many have argued that AI could lead to a substantial lowering of wages, job displacement, and even large-scale elimination of employment opportunities as the structure of the economy changes productivity," the authors write….

A question not addressed by the paper is whether it creates a way for companies to buy their way out of big ethical issues. In other words, would paying out profits be a way for companies, and society, to stop thinking critically about the impact of AI on jobs? And would it be a way to avoid regulation? That doesn't seem to be the authors' intent, but it's worth keeping in mind as a potential issue, just like any corporate attempt to resolve ethical issues through the marketplace.  Another question, more intriguing intellectually, is whether the whole matter of discounted future rewards and costs can be gamified by the companies that will themselves earn the profits, or may earn them. In theory, a company such as DeepMind, a unit of Google, could simulate future rewards as if it were a reinforcement learning problem like that of AlphaZero.’
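
The "discounted future rewards" mentioned in the last paragraph is the standard device, shared by present-value accounting and reinforcement learning, of summing a stream of future profits or costs while shrinking each later year by a discount factor. A minimal sketch with made-up figures:

```python
# Minimal sketch of discounting a stream of future rewards or costs, the
# device the ZDNet piece refers to. All figures below are invented purely
# to illustrate the arithmetic.
def discounted_sum(values, gamma):
    """Sum values[t] * gamma**t over time steps t."""
    return sum(v * gamma ** t for t, v in enumerate(values))

projected_profits = [100, 120, 150, 180, 200]   # hypothetical yearly AI-driven profits
displaced_wages = [40, 60, 80, 100, 120]        # hypothetical yearly wage losses

gamma = 0.95  # discount factor: later years count for slightly less

print("discounted profits:    ", round(discounted_sum(projected_profits, gamma), 1))
print("discounted wage losses:", round(discounted_sum(displaced_wages, gamma), 1))
```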

Make no mistake: Military robots are not there to preserve human life, they are there to allow even more endless wars

RT 05.01.20

US military and killer robots:

‘The Fort Benning experiment already hints at a dystopian future of endless war, but the tech used to win those simulated skirmishes is positively quaint compared to other technology in the pipeline. Autonomous killer robots – bots that select and kill their own targets using artificial intelligence – are the logical endpoint of the mission to dehumanize war completely, filling the ranks with soldiers that won’t ask questions, won’t talk back, and won’t hesitate to shoot whoever they’re told to shoot. The pitfalls to such a technology are obvious. If AI can’t be trained to distinguish between sarcasm and normal speech, or between a convicted felon and a congressman, how can it be trusted to reliably distinguish between civilian and soldier? Or even between friend and foe?’ 

Google, Facebook, Neuralink Sued for Weaponized AI Tech Transfer, Complicity to Genocide in China and Endangering Humanity with Misuse of AI 

The AI Organisation is suing big tech companies.

A Russian startup is selling robot clones of real people

Futurism 02.11.19

Russian company producing robots that look like real people:

‘“Everyone will now be able to order a robot with any appearance — for professional or personal use,” Aleksei Iuzhakov, Chairman of Promobot’s Board of Directors, said in a press release, later encouraging people to “imagine a replica of Michael Jordan selling basketball uniforms and William Shakespeare reading his own texts in a museum.”  Promobot told CNBC it’s already taking orders for Robo-Cs and has started building four robot clones. One of the bots will be stationed in a government service center where it will perform several functions, including passport scans. Another will be a clone of Albert Einstein for a robot exhibition.’

5G – When has absolute power not corrupted absolutely? – The China manoeuvre

References an article from 5G-EMF on 5G and the Chinesifying of the world:

‘China has been building what it calls “the world’s biggest camera surveillance network”. Across the country, 170 million CCTV cameras are already in place and an estimated 400 million new ones will be installed in the next three years.  Many of the cameras are fitted with artificial intelligence, including facial recognition technology.’

AI expert calls for end to UK use of ‘racially biased’ algorithms

The Guardian 12.12.19

Bias in AI:

‘An expert on artificial intelligence has called for all algorithms that make life-changing decisions – in areas from job applications to immigration into the UK – to be halted immediately.  Prof Noel Sharkey, who is also a leading figure in a global campaign against “killer robots”, said algorithms were so “infected with biases” that their decision-making processes could not be fair or trusted.’

Robotics researchers have a duty to prevent autonomous weapons

The Conversation 04.12.19

Ways to discourage engineers from working on autonomous weapons:

‘About a year ago, a group of researchers in artificial intelligence and autonomous robotics put forward a pledge to refrain from developing lethal autonomous weapons. They defined lethal autonomous weapons as platforms that are capable of “selecting and engaging targets without human intervention.” As a robotics researcher who isn’t interested in developing autonomous targeting techniques, I felt that the pledge missed the crux of the danger. It glossed over important ethical questions that need to be addressed, especially those at the broad intersection of drone applications that could be either benign or violent.’ 

DARPA is betting on AI to bring the next generation of wireless devices online

Technology Review 25.10.19

Spectrum bands to be allocated by AI:

‘DARPA asked engineers and researchers to design a new type of communication device that doesn’t broadcast on the same frequency every time. Instead, it uses a machine-learning algorithm to find the frequencies that are immediately available, and different devices’ algorithms work together to optimize spectrum use. Rather than being distributed permanently to single, exclusive owners, spectrum is allocated dynamically and automatically in real time.  “We need to put the world of spectrum management onto a different technological base,” says Paul Tilghman, a program manager at DARPA, “and really move from a system today that is largely managed by people with pen and paper to a system that’s largely managed by machines autonomously—at machine time scales.”’
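
Stripped of the machine learning, the core idea DARPA describes is that a radio senses which channels are currently busy and grabs a free one on the fly, instead of holding a fixed licence to a single frequency. A toy sketch of that dynamic allocation (no learning, invented numbers) gives the flavour:

```python
# Toy sketch of dynamic spectrum allocation: each new device transmits on the
# currently least-occupied channel rather than a permanently assigned one.
# The real DARPA systems add machine learning and cooperation between radios;
# the channel count and device count here are invented for illustration.
import random

NUM_CHANNELS = 8
occupancy = [0] * NUM_CHANNELS   # number of devices currently on each channel

def pick_channel():
    """Choose the least-occupied channel, breaking ties at random."""
    lightest = min(occupancy)
    candidates = [ch for ch, load in enumerate(occupancy) if load == lightest]
    return random.choice(candidates)

# Simulate 20 devices coming online and claiming spectrum dynamically.
for device in range(20):
    channel = pick_channel()
    occupancy[channel] += 1
    print(f"device {device:2d} -> channel {channel}")

print("final load per channel:", occupancy)
```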

Climate Change and Technology

Counterpunch 27.09.19

Technology is not the answer:

‘“Advocating for more technology is fundamentally different from claiming that ‘the problem of technology’ has been solved. In fact, even amongst leading academics at leading universities, the ‘problem of technology’ appears never to have occurred. It’s fine, A-Okay, etc. if ‘rigorous’ scientific analyses take it into account and decide it isn’t a factor. But that isn’t the case. They are rigorously ignorant of basic questions about what it is that technology actually ‘does.”’

We can’t trust AI systems built on deep learning alone

Technology Review 27.09.19

The limitations of deep learning:

‘I think that we’re living in a weird moment in history where we are giving a lot of trust to software that doesn’t deserve that trust. I think that the worries that we have now are not permanent. A hundred years from now, AI will warrant our trust—and maybe sooner.  But right now AI is dangerous, and not in the way that Elon Musk is worried about. But in the way of job interview systems that discriminate against women no matter what the programmers do because the techniques that they use are too unsophisticated.’ 

Pope urges Silicon Valley to avoid a new 'barbarism' with tech like artificial intelligence

NBC 27.09.19

Pope Francis warns against AI:

‘Francis said technology needed "both theoretical and practical moral principles”.  He warned of the dangers of the use of artificial intelligence "to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence".’

Digital dystopia: how algorithms punish the poor

The Guardian 14.10.19

‘Their dispatches reveal how unemployment benefits, child support, housing and food subsidies and much more are being scrambled online. Vast sums are being spent by governments across the industrialised and developing worlds on automating poverty and in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.  At its most forbidding, Guardian reporters paint a picture of a 21st-century Dickensian dystopia that is taking shape with breakneck speed. The American political scientist Virginia Eubanks has a phrase for it: “The digital poorhouse.”’