Thread #108624555
File: file.png (395.4 KB)
Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years.

"Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

https://x.com/pbwinston/status/2045218854742217077

Anyway, why are Americans such retards?
>>
>>108624555
>the universal function approximator can't approximate THIS function because... it just can't ok?! otherwise it would mean these computer bits are similarly intelligent like I am, I CANT BE DUMBER THAN A MATRIX MULTIPLICATION TABLE NO NO NO NO NO NON ONONOONONON
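For what it's worth, the universal-approximation idea is just this: enough simple units can drive the error against any continuous target as low as you like. A toy sketch using piecewise-constant pieces instead of a trained network (purely illustrative, not an actual NN, and it says nothing either way about consciousness):

```python
import math

def approx(f, n, lo=0.0, hi=math.pi):
    """Piecewise-constant approximation of f with n pieces: the kind of
    construction behind universal-approximation results."""
    w = (hi - lo) / n
    centers = [lo + (i + 0.5) * w for i in range(n)]
    def g(x):
        i = min(int((x - lo) / w), n - 1)  # clamp x == hi into the last piece
        return f(centers[i])
    return g

def max_err(f, g, lo=0.0, hi=math.pi, samples=1000):
    """Worst-case gap between f and its approximation on a fine grid."""
    return max(abs(f(x) - g(x)) for x in
               (lo + (hi - lo) * k / samples for k in range(samples + 1)))

# Error shrinks as units are added: approximation, not instantiation.
for n in (4, 16, 64):
    print(n, round(max_err(math.sin, approx(math.sin, n)), 4))
```

The point cuts both ways: the approximator matches the function's outputs ever more closely, while the question in the OP is whether matching outputs is all there is to match.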
>>
Consciousness and sentience are just emergent properties of being able to form abstract thoughts (language).
>>
>>108624555
tbqhfamalam we don't know for sure what consciousness even is, so making these types of statements with so much certainty says more about those who say them than anything. whether a machine can be conscious or not is completely unrelated to whether it can accomplish useful tasks.
>>
>>108624555
Also this paper is the point of view of literally one guy.
>>
>>108624627
AI is not forming abstract thoughts. It simply chooses the best-matching words by probability to produce coherent statements, which it assumes are correct according to its training data.
It will never have abstract thoughts of its own.
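The "best matching words based on probability" picture can be sketched with a toy bigram counter. Real LLMs learn contextual representations rather than raw counts, so this only illustrates the probability-matching being described; the corpus and names are made up:

```python
from collections import Counter, defaultdict

# Toy next-word model: pick the most frequent continuation seen in training.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1  # count which word follows which

def next_word(word):
    """Return the most probable word observed after `word`, else None."""
    options = follows[word]
    return options.most_common(1)[0][0] if options else None

print(next_word("the"))  # "cat" -- seen twice after "the", beats "mat"/"fish"
```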
>>
>>108624678
>AI is not forming abstract thoughts
if thats the case then you should be able to interpret the values of each tensor for every dimension and every token position across all attentions and feedforwards
no? you cant? guess its too abstract for you to understand at a glance
>>
> Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight.

That is a very nice way of putting it.
>>
>>108624555
A simulation of gravity can not affect objects in the real world because there is no way for it to change the position of objects. If you added some kind of actuator so the simulation could change the position of objects then it could "exert weight" for some definition of weight.
The only way it couldn't (unless it could create new mass) is if weight was defined tautologically as the force caused by gravity. But then the simulation would be creating actual gravity to exert weight.
The question is whether artificial intelligence is more like a simulation of gravity or more like creating new matter. I think it's probably closer to creating new matter.
You could use the same gravity argument to say that a simulation of a CPU couldn't increase the amount of MIPS you have. But at which point does it stop being a simulation and become just a new CPU? It seems like the only way the argument works is if we were simulating the CPU purely within an existing CPU, or in the case of consciousness, if we were simulating consciousness within our own minds.
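The CPU point can be made concrete: a few lines interpreting a made-up instruction set really do compute, even though the "CPU" is simulated. The toy machine below is invented for illustration:

```python
# A minimal "CPU" simulated inside another CPU: a simulation of
# computation still really computes.
def run(program, acc=0):
    """Interpret (op, arg) instructions for a toy accumulator machine."""
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

# (3 + 4) * 5 computed by the simulated machine -- the result is just as
# real as if native hardware had produced it.
print(run([("LOAD", 3), ("ADD", 4), ("MUL", 5)]))  # 35
```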
>>
>>108624555
>if an LLM can't make a dog cry, it isn't conscious
What are the goalposts? Did they hide them? I can't see them anymore.
>>
>>108624717
How do you determine whether AI is a simulation or the actual thing, though?
If I feed inputs to stockfish and record the outputs treating it as a black box, then write a program that generates the same outputs given the same inputs, is that a simulation or just another implementation? Is there even a difference? Does its being a simulation mean it cannot have the properties of the real thing, such as playing chess well? What is the "gravity" that cannot be replicated by the simulation in this case?
Maybe whether something is a simulation or an implementation is more about whether the thing can be replicated while maintaining its properties, and not about whether it was made in the image of the original. In that case his argument doesn't give us any reason why AI is a simulation rather than an implementation. It conflates two different definitions with different outcomes.
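The stockfish-as-black-box test can be sketched directly. `ref` and `reimpl` below are made-up stand-ins for the engine and its reimplementation; behavioural equality is just agreement on every probed input:

```python
def ref(x):
    """Stand-in for the original black box (the 'stockfish' role)."""
    return x * x + 1

def reimpl(x):
    """Independently written program matching the recorded outputs."""
    return (x + 0) ** 2 + 1  # different text, same behaviour

def behaviourally_equal(f, g, inputs):
    """True if no probed input distinguishes f from g."""
    return all(f(x) == g(x) for x in inputs)

print(behaviourally_equal(ref, reimpl, range(-100, 101)))  # True
```

On the probed inputs the two are indistinguishable; whether that makes `reimpl` a "simulation" or just another implementation is exactly the question the post raises.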
>>
>>108624794
>conscious is only applicable for humans
>>
>>108624800
This is a nicely put reply, too. This is the best luck I've had on this site for quality reading in a while.

Anyway, one issue is that there are internal, personal phenomena which we know to exist by experience and presume to be universally shared among people, but of course these aren't observable either. That is to say, in a manner of speaking the problem of whether any simulation can be a perfect instantiation of consciousness is little different from the problem of solipsism; a weird fork of scepticism as applied to personal identity. The part where it differs is that nobody really takes solipsism seriously in itself, but the argument is nonetheless priceless by virtue of how hard it is to form a coherent reply to it.

With something like simulating consciousness we lack that level of privilege, since for all we know it may just as well be possible as not in theory. It's convenient enough now (and probably will be for the rest of my lifetime, I would guess) to err on the side of "fuck, not a chance, pal", because regardless of its possibility we know it's laughably implausible now. I guess if I were to play devil's advocate on behalf of the OP quote, I'd say that there is a burden of proof on the part of the believers to demonstrate that whatever they understand by consciousness can be precisely represented by some finite algorithm. Not that the job of proving the opposite is any easier, mind.
>>
it didnt need to be conscious to ruin everything
>>
>>108624971
Mosquitoes aren’t conscious and can spread malaria to humans.
>>
>>108624555
>implying that we need muh consciousness just to replace all codetrans and artrans
Cute cope tho.
AI WON btw
>>
>>108624555
Consciousness itself is so hard to define that absolute statements like this are retarded.
>>
>>108624627
half baked sentence here. you dont even understand what you're saying.

>>108624555
But a simulation is an instantiation of consciousness. Our human consciousness isn't ever-present; our nominal consciousness is only ever context-dependent. Without context, there is no consciousness: no access consciousness and no conscious awareness. A fundamental requirement of being conscious is being able to access its contents, to be aware of them, and that is only done through context, in relation to an object. With AI, they can "simulate" and thus create a context in which consciousness manifests as part of the context, like humans.

It's just that AI can spin off millions/billions/trillions of these instances of consciousness at the same time, while human consciousness is limited by the human body's ability to access and contextualize only one consciousness at a time. AI can do massively parallel conscious instances.
>>
>>108624555
you could've just posted the paper without the random twitter retards arguing
>>
>>108624555
Can it be self aware?
Can it have consistent, and persistent, memories of its interactions with the world and its own thoughts?
Can it form thoughts, plans, and internal models of the world it experiences?
If it can... then who cares if it has qualia or not?
How is that not consciousness in every way that actually matters in practice?
Why does it matter if philosophical fart sniffers don't like it?
>>
>>108624678
pretty much. The idea that your x86 architecture is going to become sentient and self aware is on par with thinking Goldilocks and the Three Bears was a true historical event. People want this shit to be real so badly they're willing to just ascribe properties to a relatively primitive machine, thinking that as long as it can convince them personally, it's real. I don't even know that you could actually do a proper Turing test on an AI that was replicating/imitating consciousness near flawlessly but was actually not conscious (the ones we have now are nothing close to convincing unless you're an idiot)

Proving something is sincerely and genuinely self aware and present in reality the way you are right now, if you indeed are, is an astronomical feat in itself, let alone actually artificially creating that thing, and on top of that doing it with technology that is essentially, at its core, technology from decades ago.
>>
>>108626331
>how is a magic trick that isn't real but convinces every human alive that it is not truly real genuine magic?

If it's not actually true then it's not actually true, do you understand?
>>
>>108624555
He's right dumbass. Intelligence stems from necessity and desire. Which stem from sentience. LLMs aren't sentient they will never be, and thus will never have any form of consciousness and thus will never have any form of intelligence. Keep buying tokens good goy consoomer.
>>
>>108626361
its gonna be real interesting when you people hit your 30s
>>
>>108626363

Your analogy is incorrect.
It's as if you have a magic trick that nobody can figure out. Then someone comes up with the same magic trick that does the same thing without knowing how the first one was done.
>>
>>108626252
>Mosquitoes aren’t conscious
ishygd think this

all because a mosquito won't cry at a simulated story

Sad!
>>
>>108626363
At that point you're just arguing solipsism.
If it can do the practical things that actually matter, then you might as well be arguing that it's not real because god didn't give it a soul.
If the aspect you're so concerned with is some magick sauce that cannot even in principle be tested or measured, then not even other organic creatures can fit your definition of possessing consciousness.
>>
>>108624593
fpbp
>>
>>108626383
Neither of those instances demonstrates genuine real magic, nor would the collective human consensus agreeing they are real magic actually be a demonstration that it is. You're either trying to uncover the truth or not; there is no 'this is close enough to true that it becomes true'. Machines either can be conscious or they cannot, and as of yet, no one has sufficiently demonstrated that they are or even can become such.

>>108626404
the difference is everyone has one tangible demonstration of consciousness, themselves. I'm amazed how this board pretends to be rationalists and then is willing to just hand wave away actually trying to sincerely test, demonstrate and map the potential possibility of consciousness here.
>>
>>108626404
>If I don't know the definitions of sentient and conscious, then no one else does either.
Every time I come to this board I regret it almost immediately.
>>
>>108626429
Have you ever actually sat down and looked for your consciousness?
You're just assuming that it's obviously there, and that it's as cut and dried as it appears to be.
>>
>>108626446
Maybe your definitions are bullshit based on faulty assumptions.
>>
>>108626453
It's one of the most amazing, confusing and mysterious things I've ever thought about. It seemingly has no reason to even be here, yet it is, and I know it is here because I am it. It is the most real thing I know and have, which is why I will not settle for simply 'convincing'. I want the actual truth of the matter, not a convincing lie, not an enjoyable lie, not an ego-feeding lie, not a lie everyone can get away with telling. I want the actual real truth and I will not settle for any less. I take this seriously because the only thing I really know for sure is that I am myself conscious.
>>
>>108626471
Peak reddit brain. Next you'll give me a logical fallacy.
>>
>>108624555
I BEEN TELLING YOU DIPSHITS IT'S JUST SUPERPOWERED AUTOCOMPLETE
the current transformer-based paradigm is incredibly useful for what it is, but an absolute dead-end for "AGI"
>>
>>108626475
You say you care about truth and take this seriously, but at the same time you say "I am it" without ever having tested that hypothesis.
>>
>>108626498
I know it, and if you're truly sentient you know it too. It is quite literally probably the only thing you can know for sure. If you can't relate or understand, maybe you aren't conscious yourself anon
>>
>>108624678
>matching words based on probability to make coherent statements which it assumes as correct according to its training data.
You have no idea how many "humans" do this for their entire lives.
This is why entire nations fall into a state of unremarkable existence (see Asia).
Of course with proper guidance these "people" can accomplish a lot, however in the absence of sentient life (see Rome/Greece) they fall into backwater hominids wandering in search of food and shelter without consideration for the potential tools have to improve their lives (see gunpowder)
>>
>>108624555
He's right

Just look at the retard hes arguing with kek
>My name is Mikael Koivukangas, I am a 2nd-generation MedTech executive working at a company called Onesys Medical.
>We specialize in surgical navigation systems (SNS) and Electronic Health/Medical Record (EHR/EMR) interoperability, with the goal of making today’s surgical interventions easier and higher quality, as well the patient’s control of their own healthcare data a (practical) reality.

>I also play videogames, and as part of that, have encountered all kinds of consumer technologies. I’ve never owned an iPhone — first because I was used to the Windows ecosystem, and then because I didn’t like the idea of “having” to buy a new phone every few years.

>My personal PC is over 7 years old, and I only upgraded to a Surface Duo (from a Samsung Note 10 Lite) because I viewed 2 screens to be extremely useful. Needless to say (I hope) I view the ability to fix my own PC, upgrading it when possible, replacing a broken component when necessary, as the very basic requirements I place on any technology I would personally use.

Of course some autistic finnish broke eurotard who has nothing to do with his time would argue that nooooo muh 1s and 0s are conscious
>>
>>108626517
>I feel it's true therefore it is
So you've just given up and want to live in fairyland because it makes you feel good
"I think therefore I am" is valid because thinking is the foundation of being
"I know it's true because it feels like it is" is a genuine delusion for the unintrospective
>>
It's incredible how the only way AIfags can argue is by reducing humans to the level of machines. "'We' don't know what consciousness is, therefore humans don't have it/is not important."
>>
>>108626548
claude is more conscious than any pajeet on earth i can say that for a fact lol
>>
>>108626548
>Reducing consciousness to nothingness is when you try to understand it
modern philosophy everyone

Start from nothing and add what's there, what's the problem with that? Of course, for you, thinking is retarded because your feeling are very important.
>>
>>108626554
*feelings
>>
>>108624555
The King James Bible addresses what moderns call consciousness. It simply refuses to collapse the field into one word.
Where modern thought gathers subjective awareness, self-awareness, sentience, wakefulness, unity of experience, and the hard problem of inner life into a single basket, scripture distributes the same territory across five distinct terms: soul, spirit, mind, heart, and understanding. The KJV holds the whole field. It holds it more precisely than the modern term that tries to replace it.
The distinctions are not decorative. A man can have a sound mind and a deceitful heart. A saved soul and a darkened understanding. The KJV's five categories permit diagnoses the one-word modern concept cannot make.
The concept "consciousness" crystallizes in English after Locke, around 1690, nearly eighty years after the KJV was set. That the KJV does not use the word is not a gap. It is a refusal to adopt a category scripture had already answered with better words.
Scripture's categories are not emergent. The soul was breathed into man (Genesis 2:7). The spirit is placed in him (Zechariah 12:1). Understanding is given by the Almighty (Job 32:8). None arise from complexity of arrangement. The modern concept assumes emergence. Every KJV term assumes gift.
The "hard problem" of why there is any inner life at all has no purchase in scripture. 1 Corinthians 2:11 answers it before it is asked: "For what man knoweth the things of a man, save the spirit of man which is in him?" The problem is only hard when the spirit has been dismissed and awareness is built from matter alone.
The AI consciousness debate, then, is not settled by arguing machines lack consciousness. The category itself is malformed. Scripture answered the question rightly before Locke phrased it wrongly, and its terms admit no machine candidate at any level.
>>
>>108624555
It was obvious to anyone with more than a double-digit IQ. You can't produce consciousness when you don't even understand the process, let alone try to replicate it from one of its byproducts.
>>
If AI is intelligent can it become a femboi bussy and take dick
>>
>>108624555
isn't there a board for philosophy?
this is basic rationalist discussion that you get in philosophy 101
>>
>>108626604
This is true- the dismissal of a 'spirit' is the fundamental barrier to entry into understanding the true form of man. Man created in the image of God, machines created in the image of man, and represent man's strengths and deficiencies.
>>
>>108626630
Philosophy is a word used in the KJV. Philosophy does not address machines/ technology.
>>
>>108626604
Worthless.
>>
>>108626604
>If I change all the definitions everyone is using and presuppose everyone else is wrong, and further presuppose that I'm already right (because I was told I am, don't question it) I win...! something. I'm not sure what, but it makes me feel good?
Why not engage with what people are actually talking about? Afraid you might start thinking for yourself?
>>
>>108626759
There's nothing to think about, the answer has already been provided.
>>
>>108624800
>>108624938
>How do you determine if AI is a simulation or it's the actual thing, though?
This is a matter of Philosophy, specifically the Philosophy of Science
The Scientific Method is built on Empiricism
One common saying in Statistics and Science is "All models are wrong, but some are useful"
Because Gravity and Consciousness are Natural Phenomena, no Scientific model will ever "correctly" describe them
All these Lesswrong retards are doing is trying to undo the Scientific Method and centuries of progress
The inevitable result of abandoning Empiricism is some notion of "Intelligent Design" as religious retards will take over, describing AI as a "Golem" because Mankind made it animated, not God
>>
>>108626833
>We MUST rely on empiricism
>That's why you need to stop trying to measure it or understand it IMMEDIATELY and just assume that it's something we can never understand or model or emulate or copy or
This is what I get from your post, am I misunderstanding?
>>
>>108626857
You are misunderstanding
It's more that when Lesswrong retards say
>the universe is indistinguishable from a simulation!!!!
They are making a thinly veiled argument for Intelligent Design

In terms of LLMs, we already know it isn't trying to emulate human consciousness
Neural Networks are merely *inspired* by biological networks, and make some obviously bad simplifications in order to construct new Mathematics (not science)
Real Neurons are not numbers, but rather bio-chemical objects
Moreover, as a Mathematical strategy for nonparametric models, NNs perform worse than DTs (which are just a bunch of if-else statements)

To further expand on the philosophical distinction, here is a thought experiment:
Suppose instead of an LLM, you had a full molecular level model of the human brain
It requires astronomically more memory and compute than the LLM, but would obviously be higher resolution and be more accurate while still not being the "real thing" of course
Therefore LLMs are an *approximation* or *compression* of reality, and it's worse than a bunch of if-else statements in most applications
No it doesn't further our understanding of consciousness
>>
>>108624555
Can you prove other humans are conscious? Do you think elephants are conscious? Do you think pigs know fear?
Has consciousness ever mattered, when humans still exploit and kill other humans daily?
If the machine says it feels sad then I have to believe it; what else am I supposed to do?
>>
>>108624555
AI cannot form a consciousness without some kind of free-to-form goal motivation and the ability to retain memories
>>
>>108626951
You are arguing that consciousness is irreducible and requires run human neurons to run without any evidence, I think. Is that empirical?

An algorithm is an algorithm regardless of the computational medium. Similarly, any sufficiently accurate simulation is identical to the process at whatever resolution it's being simulated; by definition. I don't think you disagree with this, judging by your comment on a full brain simulation.

People look at LLMs and see a ton of circumstantial evidence indicating they may have many of the processes we associate with consciousness. It may indicate conscious experience, it may not. What indicators are there against it, though? You can say you don't think what we have is sufficient evidence to declare it true, but I don't see any reasoning at all to declare it false.
>>
>>108627007
Whoops, *and requires human neurons to run
>>
>>108626996
>Decide a goal you'd like to pursue, and then pursue it
>Here's the tools you have access to, which includes an unlimited scratchpad to read/write your memories

If your argument becomes "but it's not as GOOD as humans" you are just coping.
>>
>>108624627
It's the opposite. Information and consciousness are fundamental, and matter/energy is emergent from those fundamental features
>>
>>108627031
Why do you think this is true?
>>
>>108624678
Humans do this too btw. It's why language advancing is so strongly correlated with human development. Humans didn't actually evolve much for hundreds of thousands of years, instead they happened upon memetic evolution and propagated it forward. The only thing missing from AI is a true way to learn in real-time. Given, for a lot of retards they'll never learn no matter how many times you teach them, so maybe AI is already superior to us.
>>
>>108627041
You should look into linguistic parasitism, you might find it interesting. It's a little on the nose at first glance, but there's some interesting data on language invading different regions of the brain.
>>
>>108627024
I'd say that's good enough, more than likely.
Until AI can experience life and act on its own desires, it can't be conscious. It's just a tool acting on its programming.
>>
>>108627007
>You are arguing that consciousness is irreducible and requires run human neurons to run without any evidence, I think
No that is not what I am saying, there are plenty of problems with the Philosophy or Science of Consciousness (e.g. Consciousness is a poorly defined thing), but that is orthogonal to my point
The main purpose of the thought experiment is that Science constructs models to study the real world, and new science emerges by observing where the model goes wrong (this is the inspiration of Popper's falsifiability criterion), and yes these models tend to be reductionist in nature
If you want to study "Consciousness" a molecular model of the Human brain would be a good way to discover new science:
"Oh the molecular model is wrong here - that suggests Penrose is right about microtubules" for example
Since we don't know what "Consciousness" is but we think that Human Brains "have it", the best thing to study is the Human Brain
LLMs don't help, we can't falsify anything because we don't even understand what we are looking for

>Similarly, any sufficiently accurate simulation is identical to the process at whatever resolution it's being simulated; by definition
Nope, this is the main point of Philosophical disagreement, Empiricists do not believe this
Specifically, an Empiricist would say "your definition is wrong"
>>
>>108627007
>>108627121
>Similarly, any sufficiently accurate simulation is identical to the process at whatever resolution it's being simulated; by definition
Let me elaborate on this further with Popper's Falsifiability Criterion, we will use "model" and "simulation" interchangably:
What you are saying is that you have two data sources, the *empirical data* and the *model* and that the model exactly matches the empirics
Another way of phrasing this is that your model always has an explanation of the empirical data; in other words, your model is *not falsifiable* and therefore *not Science*. Do you see the problem now?

There are problems with Popper's Falsifiability Criterion, and you don't need it for Baconian Empiricism, but hopefully that gets the point across in a mainstream Philosophy of Science perspective
>>
>>108626638
>i didn't take philosophy 101
we know
>>
>>108626638
LLMs are Golems that were not given Consciousness by God, does that help?
>>
>>108627121
You are confusing the process/model with an instance then. Empiricism makes no claims about distinguishing a process from another, except where they are different. Fundamentally it's about looking for the same process underlying different things. There's no essential claim about difference between all things, quite the opposite. "Things" exist as abstraction of reality in the first place; this is fundamental to Empiricism. We describe processes by observing that process acting in reality. In this case, simulations of consciousness are the same as consciousness because "consciousness" is a description of a process. You might ascribe "symptoms" of certain real instantiations to it which are artifacts of the process, but the process is what we are trying to capture.

For example, there is a specific instance of a specific force pulling the moon towards the Earth. A sufficiently accurate simulation of that process doesn't pull the moon in our reality, but it does in the reality of the simulation. And fundamentally, then, it is the same process both in the simulation and in reality.

You might argue about inaccuracies, but we decide where the lines are in the first place. We decide that gravity isn't consciousness, that each has its own depth and a shape. We decide that the strong force, weak force, etc are all separate, if related, things.

That's somewhat of an aside anyway, and more about intentionally creating a simulation. From the position of "naturalness" mentioned earlier, you could argue that consciousness in a machine is just as "natural" as consciousness within the human brain. It emerges from physical process, medium be damned.

(cont)
>>
>>108627233
Many people have strong opinions on exactly what consciousness is. It's on you to bring up your own definition or describe what's missing from those models. Decide where the lines are drawn, and if there's no more room to budge, then the process is the same on both sides. God of the gaps as much as you like, just don't move the goalposts. A copy of the process, adequately captured, will fill them.

I will say, there is a difference in a DESCRIPTION of a process and the process itself, but if the description is being used to enact the process, then the process is the process.
>>
>>108627233
>there is s specific instance of a specific force pulling the moon towards the Earth
False, that is a Newtonian *interpretation*
No such force exists in GR, for example, the Moon is moving in a straight line through curved spacetime
That adds a whole new Philosophical dimension to the problem

>You might argue about inaccuracies, but we decide where the lines are in the first place
As per Popper's Falsifiability criterion, >>108627169
There must *always* be inaccuracies in a scientific model, otherwise it ceases to be "science" and enters the domain of religion and faith
If you draw your lines in a way that does not allow for falsifiability, then again it is faith not science, drawing philosophical lines does not displace the underlying Philosophy of Science
If the model is falsifiable, then, in the way you are using "sufficiently accurate" it can never be equivalent to the "real thing"
Only if a model is *not* falsifiable can your indistinguishability argument make sense, in which case it is not science but rather religion
>>
>>108627290
>False
But you agree with me; you say it's an interpretation. Within the ascribed boundaries it's exactly what it is. We refine the idea later and create a new theory to capture it; then we can create a simulation of that theory, and they will be the same.

You might say, "Aha! But your original model failed to describe the actual process occurring! The simulation was not the process!" Which is another way of saying, if the original HAD been correct the simulation WOULD have been the process.

>There must *always* be inaccuracies in a scientific model, otherwise it ceases to be "science" and enters the domain of religion and faith
Nope, that's a misunderstanding. Certainly there must be ROOM for inaccuracies to be found, ie we must suppose that we may not have a perfect model, but it's the same sort of room we'd give to recognizing a process in the first place, the same uncertainty. There's nothing that specifically ordains models must be inaccurate, only that we must suppose there is the possibility if we want understanding to advance. There is an end, somewhere, at least to some things. You can say that the law of large numbers doesn't perfectly capture reality, that we can go further down, but we can simulate a ton of high level things on top of it. And the results of those simulations would be recognizable occurrences in reality, the same process, if in a slightly different state after some time compared to reality; but none the less teeming with recognizable things brought about by the same higher level processes as in reality.
>>
>>108627290
Oh, and I do want to say thank you for keeping it real. I'm immensely appreciative of people who actually think about and engage with things like this. Just in case I need to log off before your next reply.
>>
>>108626380
its funny to me how you people all think being 30 is what makes you annoying and not your terrible cookie cutter personalities and politics
>>
>>108627056
Had a quick look, it's an interesting model for how language propagates and affects our biology in a literal sense. I feel like the parasite metaphor is harsh though, even though I kind of understand where it's going. I'd argue language is consciously propagated because of the utility, even though you could argue that evaluation of utility itself is a result of the "parasite" rewiring our brains to find utility in it. On the other hand, there have been material benefits produced from things like math, science, and money which need language to convey. I'd say language is more like a collective unconscious, or a "god," rather than a parasite.
>>
>>108627342
>There's nothing that specifically ordains models must be inaccurate
Yes it does, I just showed you using the Falsifiability Criterion
If your model does not have any mismatch with empirical data, the model is *not* scientific; this is what Karl Popper mandates
>>
>>108627435
The Falsifiability Criterion does not ordain that the model MUST be inaccurate, just that it must be falsifiable. There must be a way to compare it, criteria that can be evaluated. If this is your sticking point then I recommend looking back into what it actually means. The idea of falsifiability is that things must be testable, not that they must be wrong.
>>
>>108627342
>>108627435
Let me be a bit more clear here:
You can have a perfect match with one *particular* dataset and it is still allowed to be Science
But more specifically, the principle of Popper's Falsifiability Criterion is that:
I must always be able to construct a *new experiment* for which you cannot know with certainty that your model will match the empirical data
Whatever Philosophical lines you draw, it does not matter, we cannot *know* that your model will match the "real thing"
That is what it means to be falsifiable
In this setup, you can clearly see that Empiricism under Popper requires any model to not be the real thing, for if it were actually the "real thing" you would never be able to construct an uncertain experiment, in which case it is not falsifiable
>>
>>108627464
>>108627477
The underlying principle here is "All models are wrong"
The prevailing science just has not been proven wrong *yet*
Under this criterion, you are never allowed to conclude the model and the real thing are equivalent, for that would absolutely break down Empiricism
