Impf_Info – Following Feed 20 Posts (filtered)

@Adriano_Mannino @NilsAlthaus RT von @Adriano_Mannino 01.04 12:50
When machines become actors, our democracy is at stake. @Adriano_Mannino and I contributed the chapter "Künstliche Superintelligenz und das Ende der Demokratie" ("Artificial Superintelligence and the End of Democracy") to the anthology "KI und Demokratie" ("AI and Democracy"). Available now!
@Adriano_Mannino @Lewis_Bollard RT von @Adriano_Mannino 17.10 17:07
A big new threat to animals and all of us who care about them: the EU may be about to break its promise to end cages for Europe's 200 million+ caged pigs, rabbits, and hens. This must be stopped. Here's the background:

In 2021, the @EU_Commission promised to ban all cages and crates in response to a petition by 1.4M Europeans. It then asked its official scientific veterinary advisors, who agreed the cages are inhumane. And it surveyed Europeans, >85% of whom backed the move.

Then the factory farm lobby got to work. In 2023, they got the Commission to delay its promised cage ban because it needed more time for "consultation." The Commission then spent two years consulting with industry. At the end, it reiterated its promise to ban the cages.

Over the last two years, the EU's Animal Welfare Commissioner @OliverVarhelyi has repeatedly promised that the Commission will propose a cage ban in 2026, next year. Even as he's refused to meet with more than one animal welfare group (he's met with dozens of industry groups), he's insisted that advocates have nothing to worry about -- his word can be trusted. Just a few weeks ago, the Commission opened a public consultation -- still open -- on how to ban the cages.

Then, this week, Euractiv obtained a leak of the EU Commission's work plan for next year. It doesn't contain the promised cage ban at all. The rumor is that this was a last-minute deletion pushed by Varhelyi himself -- the Commissioner who's meant to be safeguarding animal welfare.

If true, this is a shocking betrayal of Europe's animals -- and of the >85% of Europeans who want better for them. The only reason to scrap this proposal is that animal ag lobbyists don't like it. And if it's true that Varhelyi is leading this, then only his bosses -- @RoxanaMinzatu and @vonderleyen -- can stop this.

This is an all-hands-on-deck moment for European advocates. Contact your MEPs, your ministers of agriculture, and the Commission itself.
Animals get betrayed like this because they can't lobby for themselves. We need to be their advocates.
@Adriano_Mannino @nearcyan RT von @Adriano_Mannino 18.04 00:30
reminder of how far AGI goalposts have moved
@Adriano_Mannino @Yoshua_Bengio RT von @Adriano_Mannino 18.09 22:19
I read California Governor @GavinNewsom's comments about SB 1047 yesterday: "The governor said he is weighing what risks of AI are demonstrable versus hypothetical." https://www.bloomberg.com/news/articles/2024-09-17/newsom-says-he-s-concerned-about-chilling-effect-of-ai-bill

Here is my perspective on this: Although experts don't all agree on the magnitude and timeline of the risks, they generally agree that as AI capabilities continue to advance, major public safety risks such as AI-enabled hacking, biological attacks, or society losing control over AI could emerge.

Some reply to this: "None of these risks have materialized yet, so they are purely hypothetical." But (1) AI is rapidly getting better at abilities that increase the likelihood of these risks, and (2) we should not wait for a major catastrophe before protecting the public. Many people at the AI frontier share this concern, but are locked in an unregulated rat race. Over 125 current and former employees of frontier AI companies have called on @CAGovernor to #SignSB1047.

I sympathize with the Governor's concerns about potential downsides of the bill. But California lawmakers have done a good job of hearing many voices, including industry, which led to important improvements. SB 1047 is now a measured, middle-of-the-road bill. Basic regulation against large-scale harms is standard in all sectors that pose risks to public safety.

Leading AI companies have publicly acknowledged the risks of frontier AI. They've made voluntary commitments to ensure safety, including to the White House. That's why some of the industry resistance against SB 1047, which holds them accountable to those promises, is disheartening.

AI can lead to anything from a fantastic future to catastrophe, and decision-makers today face a difficult test. To keep the public safe while AI advances at unpredictable speed, they have to take this vast range of plausible scenarios seriously and take responsibility.
AI can bring tremendous benefits – but only if we steer it wisely, instead of just letting it happen to us and hoping that all goes well. I often wonder: Will we live up to the magnitude of this challenge? Today, the answer lies in the hands of Governor @GavinNewsom.
@Adriano_Mannino @davidchalmers42 RT von @Adriano_Mannino 21.07 20:54
this clip of me talking about AI consciousness seems to have gone wide. it's from a @worldscifest panel where @bgreene asked for "yes or no" opinions (not arguments!) on the issue.

if i were to turn the opinion into an argument, it might go something like this: (1) biology can support consciousness. (2) biology and silicon aren't relevantly different in principle [such that one can support consciousness and the other not]. therefore: (3) silicon can support consciousness in principle.

note that this simple argument isn't at all original -- some version of it can probably be found in putnam, turing, or earlier. note also that the (controversial!) claim that the brain is a machine (which comes down to what one means by "machine") plays no essential role in the argument.

of course reasonable people can disagree about the premises! perhaps the key premise is (2) and it requires support. one way to support it is to go through various candidates for a relevant principled difference between biology and silicon and argue that none of them are plausible. another way is through the neuromorphic replacement argument that i discuss later in the same conversation.

some see a tension between (1)/(3) and the hard problem. but there's not much tension: one can simultaneously allow that brains support consciousness and observe that there's an explanatory gap between the two that may take new principles to bridge. the same goes for AI systems.

this isn't a change of mind: i've argued for the possibility of AI consciousness since the 1990s. my 1994 talk on the hard problem (https://www.youtube.com/watch?v=_lWp-6hH_6g) outlined an "organizational invariance" principle that tends to support AI consciousness. you can find versions of the two strategies above for arguing for premise 2 in chapters 6 and 7 of my 1996 book "the conscious mind".

i'm not suggesting that current AI systems are conscious. but in a separate article on the possibility of consciousness in language models (https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/), i've made a related argument that within ten years or so, we may well have systems that are serious candidates for consciousness. the strategy in that article on LLM consciousness is analogous to the first strategy above in arguing for AI consciousness more generally. i go through the most plausible obstacles to consciousness in language models, and i argue that even if these obstacles exclude consciousness in current systems, they may well be overcome in a decade.

of course none of this is certain. but i think AI consciousness is something we have to take seriously. [the full conversation with @bgreene and @anilkseth can be found at https://www.youtube.com/watch?v=06-iq-0yJNM]
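Read purely as an inference, the (1)-(2)-(3) argument in the post above is a simple modus ponens on two premises; its validity (not its soundness, which turns on the premises) can be sketched formally. A minimal Lean rendering, where the propositional names are illustrative and not from the original post:

```lean
-- Propositional sketch of the argument (illustrative names):
--   p1 : biology can support consciousness
--   p2 : if biology can support consciousness, so can silicon
--        (i.e. no relevant principled difference between the two)
--   conclusion : silicon can support consciousness
theorem silicon_can_support_consciousness
    (BioSupports SiliconSupports : Prop)
    (p1 : BioSupports)
    (p2 : BioSupports → SiliconSupports) :
    SiliconSupports :=
  p2 p1  -- modus ponens: apply premise 2 to premise 1
```

The formalization makes visible what the post says in prose: all the philosophical work sits in premise (2), since the step from the premises to the conclusion is logically trivial.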
@Adriano_Mannino @PhilPublica RT von @Adriano_Mannino 19.07 16:34
Deepfakes can endanger not only personality rights but also fundamental democratic values. A legislative initiative aims to make manipulative deepfakes a criminal offense. But that does not go far enough, says the philosopher Adriano Mannino (@UCBerkeley). https://www.deutschlandfunkkultur.de/deepfakes-demokratie-bedrohung-kommentar-100.html
@Adriano_Mannino @NilsAlthaus RT von @Adriano_Mannino 16.07 09:54
Do the production and distribution of manipulative deepfakes fall under freedom of expression? @Adriano_Mannino and I say: no. And we hope for a broad consensus to ban such deepfakes by law. (1/10)
@Adriano_Mannino @mattyglesias RT von @Adriano_Mannino 03.06 14:32
The case for lab leak, from @Ayjchan. [My take: It's unknowable at this point given the lack of Chinese cooperation but it was bad that people inside and outside the government tried to shut this inquiry down] https://www.nytimes.com/interactive/2024/06/03/opinion/covid-lab-leak.html
@Adriano_Mannino RT von @Adriano_Mannino 05.04 19:59
Imagine this were about a vaccine (or something like it) that the subject-matter experts believe will, with 5% or even just 0.5% probability, cause total damage across the population. In the AI field there is currently not even an analogue of the FDA. 3/3
@Adriano_Mannino @KorbinianRueger RT von @Adriano_Mannino 04.04 06:23
In my view, the German-language discourse on the dangers of (general) artificial intelligence is above all polemical and not very helpful. Here are 6 broad misunderstandings the debate currently suffers from.
@Adriano_Mannino @rubinovitz RT von @Adriano_Mannino 20.03 03:05
The "Will GPT automate all the jobs?" paper is out, with participation from @OpenAI, OpenResearch, and @penn. 🧵 1/9
@Adriano_Mannino 16.03 18:06
R to @Adriano_Mannino: PS. https://x.com/Adriano_Mannino/status/1636428103575584777?t=j5PP1cuwwe8Z3aKikd8LCQ&s=19
@Adriano_Mannino 16.03 18:05
R to @Adriano_Mannino: Also crazy, of course, is the almost completely absent democratic oversight of this development so far. (This requires no technical "openness".) 2/2
@Adriano_Mannino 16.03 18:03
GPT-4/5/6/7... is a dual-use technology. The unethical, criminal, and military applications, but also the well-intentioned and nevertheless dangerous ones, are extremely numerous. "Openness" is prima facie crazy in such cases, as AI safety circles have always pointed out. 1/
@Adriano_Mannino RT von @Adriano_Mannino 15.03 19:45
This will be politically very tough: we barely manage to deal sensibly with climate risks, even though denial of those risks is a fringe phenomenon. Denial of AI risks, by contrast, is, like denial of pandemic risks before 2020, the prevailing opinion. 25/25
@Adriano_Mannino 15.03 19:45
R to @Adriano_Mannino: political "control problem" is unsolved: How do we create the societal conditions under which AI safety research can keep pace with runaway AI capabilities research well enough that the technical control problem gets solved in 23/
@Adriano_Mannino 15.03 19:45
R to @Adriano_Mannino: time? And how do we ensure that AI applications will have socioeconomically just effects? The current political situation is such that highly disruptive technologies can be unleashed on society without any democratic authorization. 24/
@Adriano_Mannino 15.03 19:45
R to @Adriano_Mannino: that neural networks do not suddenly pursue dangerous goals and strategies once the models leave their training domain. Such problems have already occurred and will intensify sharply as model complexity grows. Also the socio- 22/
@Adriano_Mannino 15.03 19:45
R to @Adriano_Mannino: What happens then? On what basis do we hope to be able to control and steer such developments in our interest? The technical "control problem" is unsolved: computer science cannot yet tell us how to ensure in general 21/
@Adriano_Mannino @ShakeelHashim RT von @Adriano_Mannino 15.03 16:57
I just read through the GPT-4 "system card". Despite the bland name, it's a compendium of very scary things GPT-4 can do — and some of the examples really freaked me out. Content warning: this thread includes disturbing content which is racist, violent, and about self-harm.