Zürcher Nachrichten - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

W.Odermatt--NZN