Zürcher Nachrichten - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

W.Odermatt--NZN