Zürcher Nachrichten - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings who, it judged, might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

W.Odermatt--NZN