On the "Eulering", I think the dialectic would be different if the fine tuning argument were posed informally, and treated as a vaguely strong, but difficult-to-quantify reason to believe in God.
But it's often not posed that way. Rather, proponents will say things like: "because of fine tuning, your prior for atheism needs to be 10^kajillion times greater than your prior for theism for the posterior of atheism to remain greater than the posterior for theism." And once the argument is posed that way, I think it's absolutely fair to take the measure problem very seriously, and to treat solving it as a prerequisite for making quantitative claims about the strength of the evidence provided by fine tuning.
I'm a *little* sympathetic to the Eulering objection, but 1) as Daniel Greco notes above, the FTA itself is a (IMO) spurious over-mathematization of a simple idea, and 2) the basic objection can be explained without fancy math: "we don't know what possible values the physical constants 'could have' taken, and we have no clear model that assigns 'probabilities' to possible values".
To me, the big issue with not being careful with the probability space isn't exactly, "you forgot to do this weird math; your viewpoint is this invalid", so much as, "by not doing this weird math explicitly you're allowing yourself to smuggle in a whole bunch of assumptions that are probably directly related to the crux of disagreement".
Can't add anything, but I also think it's a little silly to bring in an argument that requires math, and then act indignant that you have to do math. Yes, some stuff is contrintuitive and requires math, like building rockets, computer graphics, and philosophy when you use probability!:)
I agree that all this proof-of-god stuff from Bentham's Bulldog is bullshit. But all your arguments show too much.
Imagine a sci-fi setting. The Riemann Hypothesis turns out wrong! After long work, we found the smallest nontrivial exceptional zero of the zeta function. Supercomputers churn, we compute its real part to high precision... and its binary representation, viewed in a hex editor, contains the magic message "made by god with love, xoxo ;)". Is this proof of god?
You can't properly engage with the Bentham Bullshit without understanding why this would be a pretty strong proof that somebody with knowledge of that magical nontrivial zero messed with our history regarding development of the english language and the ascii code, playing a delayed prank on us.
On the other hand, a "proof of god" that purports to provide 12 bits (a Bayes factor of 5000) due to fine-tuning is laughable. Just selecting which crackpot proof-of-god forking path to engage with costs more than 12 bits.
But my argument would agree that we have no plausible explanation for the Riemann zero message—we know so much about strings and ASCII! We can quite reasonably say that a random string of digits is astronomically unlikely to write a grammatically correct english message in a hex editor, we can likely calculate that probability exactly!
The metaphysical question about the universal constants is emphatically NOT like this, it has a whole bunch of controversial assumptions about the nature of your random-universe-generator that must be solved before you can say something is “unlikely.” I think this objection is essentially identical to the aliens-holding-tacos objection of BB.
>Just as a note, I’m trying not to come off too elitist here so I’ll say this: I love teaching! I sincerely try to do a good job at explaining the math, and you all have been very kind! Please, ask me for clarification, I will do my level best to explain any point I make at any level of sophistication you ask: five-year-old, postgraduate, whatever.
I'm glad you wrote this, because I did not understand your section on statistical mechanics...and that turned out to be pretty central to your argument!
I think I was with you up until the Borel Set part. How does attaching a sigma-algebra let us meaningfully discuss the densities of the phase states? (Note: I have no idea what a sigma-algebra is). If all this lets us talk about "useful average quantities", what is the useful average quantity for aliens holding tacos? Can this math tell us how likely it is that the nearest alien is holding a taco? It sounds like you're saying it can; does that mean you can give me an actual number saying how likely it is? Or just that it is theoretically possible to do so?
As far as how much you have to dumb down your answer: the highest math I ever got to was Calculus. I aced the class but failed the AP exam. I always got good grades in math but I never liked math, so as soon as I didn't have to take math courses anymore I didn't.
Consider a 6 sided die. You have six individual outcomes. That’s your sample space. Collections of outcomes also have to be measurable—for example, the set of even numbers you can roll. That’s the sigma algebra, the collection of all sets of outcomes to which we can assign a probability. So for example, the set {2,4,6} (rolling even) is one element of the sigma algebra.
With continuous sample spaces, it’s a similar process to construct the sigma algebra. Essentially, instead of grouping discrete rolls on the die, you construct rules for adding together intervals, like [0,1] and [2,3]. There is more than one way to do this, the preferred method for stat mech is called a Borel algebra. Essentially, the Borel algebra contains all the “reasonable” subsets of the real line we might want to assign a probability, while intentionally avoiding some problematic non measurable sets.
So let’s say we’re interested in aliens holding tacos, and let’s ignore anything quantum and weird. Every atom in the universe has some position, and some momentum, each of which is a real number (actually a 3-vector) There are N atoms. This gives us 6N real numbers describing the possible states of the universe. Aliens holding tacos is some small subset of that total possible sets. The measure of states with aliens-holding-tacos divided by the measure of total universe states is the probability of the aliens-holding-tacos scenario, presumably very small.
Importantly, though I CANNOT give you a precise number for the alien-taco fraction of total universe states, I CAN rigorously describe the sample space (6N real numbers), the sigma algebra (Borel algebra, collections of these 6N numbers constructed according to some rules), and the measure (Lebesgue measure). So while I can’t execute the calculation, I can formally state the problem without any ambiguity or contradiction.
That's very helpful yes. To see if I understand you: you're saying that since atoms can be described using two 3-vectors, then we can define the total possible states of N atoms as 6N. Which means we can describe the probability of aliens holding tacos X as X/6N, even when we don't know what X is and what N is.
But, we can't do this for things like "possible ways the universal constants could have been" or stuff like that because those things, unlike atoms, can't be described using 6 variables? So the probability would be like X/Y where we can't actually define what either of those variable could be?
If I'm following you so far (and I recognize I may have gone off trail already), then questions about the probability of things made of atoms are describable but probability of things not made of atoms may not be. If so, I can see why you switched the problem from "Is the nearest alien thinking about tacos" to "is the nearest alien holding a taco" since subjective things like thoughts might arguably not be made of atoms. I know BB thinks they aren't made out of atoms, certainly.
But doesn't this mean that you're not actually addressing his alien taco objection? It seems like we can reasonably conclude that the nearest alien probably isn't thinking about tacos, even if thoughts aren't made of atoms. But if we can reason probabilistically about immaterial thoughts, then why not about all the fine tuning business?
Or would you say that we can't reason probabilistically about hypothetical non-material thoughts, even if it seems intuitively like we can, because they can't be described the way the position and momentum of atoms can?
You are essentially right, let me just correct the math a bit.
The X/6N formula is not quite right. 6N is the number of dimensions of the space, not its total size. Think of a room: its dimensionality is 3 (length, width, height), but its volume might be 1,000 meters cubed (big room). The probability P would be something like X/[integral over all possible states in 6N dimensions]
Now, for your main points:
For possible universes, the probability is like X/Y where we can't define Y. Precisely so, this is the central problem. Without a well-defined sample space, we cannot define either X or Y.
You also correctly deduced why I switched from "thinking" to "holding" tacos. A taco is a physical object, uncontroversially so. It allows us to define a region in our phase space (our X) without getting bogged down in the “are thoughts material” debate, which isnt relevant to my critique.
This all leads to your final question: "Or would you say that we can't reason probabilistically about hypothetical non-material thoughts... because they can't be described the way the position and momentum of atoms can?"
YES, EXACTLY. The reason we feel we can say "it's unlikely an alien is thinking about tacos" is because we are using an informal, intuitive sense of probability that doesn’t lend itself to doing math with it. This is BB’s core error.
If we managed to come up with some definition for “space of possible thoughts,” well then, now we’re in business and we can calculate this. Perhaps one is tempted to say thay thoughts are also configurations of atoms in brains…? But that’s a debate we’d have to resolve first, and without a construction of “the space of thought” we cannot calculate the probability of aliens thinking about tacos any more than we can calculate the probability of the universe’s physical constants without knowing what the possibilities are.
Thanks for taking the time to explain all of this.
The only thing I'm left with is that, while you may be right (I have no argument against it) that when we say "it's unlikely an alien is thinking about tacos" (assuming non-material thoughts) we are "using an informal, intuitive sense of probability that doesn't lend itself to doing math with it", it still seems to me that the non-math probability makes sense? I mean, it seems really unlikely that the closest alien is thinking about tacos, even if we can't use math to say how unlikely. It would seem unreasonable to say "because it is impossible to figure out the probability using numbers, we can't be justified in believing that it is unlikely the nearest alien is thinking about tacos."
On the other hand, since BB does use a lot of math and actual numbers in his fine tuning argument I can definitely see how it is a strong critique to say "You can't actually use math this way."
Just to be clear, we CAN rescue the alien thinking scenario by simply forcing thoughts to be purely physical. But more to the point, it does seem like this intuitive probability makes sense AND you can’t do math with it. But that’s a bit of a contradiction, no? That’s a clue that maybe it doesn’t make as much sense as you thought, and the exercise of trying to formalize it would lead you smack into the “are thoughts physical” problem, which is important, interesting, and has to be resolved first!
I don't think the responses from Scott or BB are substantive enough to undermine your original point. Sure, they forced you to clarify and reinforce certain parts of your original argument, but you have pretty convincingly proven in this context that Baynesian reasoning is ill-suited to task of determining the probability that God exists using the FTA
> "someone is optimizing for life" predicts fine-tuning within this range
Only after the fact, though, which points to an important limitation with purely Bayesian reasoning. If you have a "theory" that you've constructed to explain a set of facts and then you find out that it does, in fact, explain those facts, then those facts don't provide any real evidence for the theory. The theory is, at best, consistent with the facts, which it should be, given its provenance.
Not to mention that "theories" like "someone is optimizing for life" or "God exists and is a perfect being of goodness" aren't really theories in any meangingful sense and don't actually predict (or post-dict) anything specific (e.g., a constant in a mathematical model).
> I'm nervous that work like this is taking something that's obviously true - it's really weird that these constants are in the tiny realm suitable for life - and merely raising enough objections that you're not allowed to talk about it unless you have ten years of postgraduate math.
Why is it so weird that these constants are what they are? I don't share the intuition that this is surprising or weird. Even if I put aside Ian Jobling's convincing (to me, anyway) argument (https://substack.com/@ianjobling/p-163484791) and accept that the values of the constants in question are crucially important for life in our universe, I question the rationale for considering other values. Is it just because of the mathematical nature of them? That is, they are real numbers (more or less), and real numbers extend from negative infinity to infinity and fill in all the little gaps, so any possible mathematical value for a given constant is a possible physical value? Maybe I'm just restating some combination of the measure problem and the Brute Fact perspective...
Anyway, that's not even my main point. Rather, I find it endlessly fascinating and perplexing that Scott has "P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary." as his blog's tagline, but he can't be arsed to do his due diligence to back it up. The original measure problem post in response to BB required nothing like ten years of postgraduate math, and I don't recall ever seeing Scott use anything other than run of the mill frequentist statistical tests when he analyzes data.
Maybe this is overly harsh - I've enjoyed many of hist posts over many years, and I don't have anything against him - but his statistical incoherence is really salient to me.
"But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition."
I think this is likely a good statement of his position. He does mention he's never been great at math, and in his defense, it IS legitimately hard to do bayesian inference, have you ever done markov chain monte carlo? You sure as hell don't do that in your head.
...But sometimes there is no substitute, and you kinda have to do the math, you know?
Very honored to have received a mention! (ExpressSmoke is my Reddit acct). Big fan both of you and BB. You two are unironically brilliant. Keep up the great work!
God's omnipresence has been an important doctrine for over a millennium, so these questions would arise even with a finite universe, and the answers are much the same.
I can't even begin to take seriously the idea that we could infer God from fine tuning. We have no idea how many universes exist, and no idea how many of those support life. It's all too undefined.
But, in among the more complex corners of this discussion, I find it very strange that some people just accept the idea that we can apply probability notions across scenarios that differ in number of observers under the assumption that we are equally likely to be any one observer. Who says we are equally likely to be any single observer?
All probability calculations require that we have some valid means of deciding what things are equally likely, and we get contradictory results if we normalise across universes or normalise across observers. And there is literally no right answer as to which one we should normalise over. We didn't start as souls in heaven and then get randomly assigned to an observer shell with equal probability across all shells. The idea is innately dualistic.
"Who says we are equally likely to be any single observer?"
This is the core of the "self-indication assumption" that Bentham's Bulldog likes or the "self-sampling assumption" that Nick Bostrom makes. One of the commenters, Onid, said he was writing an article on this that I'm looking forward to, because I am also deeply suspicious of this argument.
I was thinking of addressing this in a short post myself. I am no mathematician but I am keenly aware of the fact that probability makes no sense in isolation; it must be applied against defensible assumptions. Often those assumptions have to appeal to probability, in the set-up, so it is barely possible to define probability in non-circular terms.
I will look into what has already been written about SIA, but it seems to sneak in dualist assumptions.
As some others remarked, the bunch of axioms one uses for mathematics is to some extent a matter of personal taste, and is culturally contingent. If one uses different axioms, one gets different mathematical universes. Some theorems change, some proofs change, much stays similar. One could call this the "unreasonable effectiveness of intuitive mathematical reasoning" -- for the practical mathematician, foundations don't matter.
The mathematical community mostly converged on ZFC, back during the foundations debate. The ZFC universe of mathematics is very large (i.e. it is easy to show that objects "exist"). So this can be viewed as a trade-off: AC makes many proofs nice and simple for undergrads ("every vectorspace has a basis, nice easy existence proof that fits into a lecture and showcases how to Zorn"), but at the same time it invites "eldritch abominations from the abyss" like non-measurable sets into our world, and we mathematicians have to spend some effort keeping these abominations in check.
I think it's a good trade-off: Many proofs get easier with AC, a few proofs get slightly harder and some theorems get slightly more convoluted to state (well, add measurability to your assumptions, one extra word spent).
But it's naive to talk about that kind of stuff like it's objective truth, ripped straight from the hands of god.
On the "Eulering", I think the dialectic would be different if the fine tuning argument were posed informally, and treated as a vaguely strong, but difficult-to-quantify reason to believe in God.
But it's often not posed that way. Rather, proponents will say things like: "because of fine tuning, your prior for atheism needs to be 10^kajillion times greater than your prior for theism for the posterior of atheism to remain greater than the posterior for theism." And once the argument is posed that way, I think it's absolutely fair to take the measure problem very seriously, and to treat solving it as a prerequisite for making quantitative claims about the strength of the evidence provided by fine tuning.
Absolutely right. Concisely said. Not a wasted syllable. 👏
I'm a *little* sympathetic to the Eulering objection, but 1) as Daniel Greco notes above, the FTA itself is a (IMO) spurious over-mathematization of a simple idea, and 2) the basic objection can be explained without fancy math: "we don't know what possible values the physical constants 'could have' taken, and we have no clear model that assigns 'probabilities' to possible values".
To me, the big issue with not being careful with the probability space isn't exactly, "you forgot to do this weird math; your viewpoint is this invalid", so much as, "by not doing this weird math explicitly you're allowing yourself to smuggle in a whole bunch of assumptions that are probably directly related to the crux of disagreement".
Can't add anything, but I also think it's a little silly to bring in an argument that requires math, and then act indignant that you have to do math. Yes, some stuff is counterintuitive and requires math, like building rockets, computer graphics, and philosophy when you use probability! :)
Great follow-up!
Regarding your general argument:
I agree that all this proof-of-god stuff from Bentham's Bulldog is bullshit. But your arguments prove too much.
Imagine a sci-fi setting. The Riemann Hypothesis turns out wrong! After long work, we found the smallest nontrivial exceptional zero of the zeta function. Supercomputers churn, we compute its real part to high precision... and its binary representation, viewed in a hex editor, contains the magic message "made by god with love, xoxo ;)". Is this proof of god?
You can't properly engage with the Bentham Bullshit without understanding why this would be a pretty strong proof that somebody with knowledge of that magical nontrivial zero messed with our history regarding the development of the English language and the ASCII code, playing a delayed prank on us.
On the other hand, a "proof of god" that purports to provide 12 bits (a Bayes factor of 5000) due to fine-tuning is laughable. Just selecting which crackpot proof-of-god forking path to engage with costs more than 12 bits.
But my argument would agree that we have no plausible explanation for the Riemann zero message—we know so much about strings and ASCII! We can quite reasonably say that a random string of digits is astronomically unlikely to write a grammatically correct English message in a hex editor; we can likely calculate that probability exactly!
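(To make "astronomically unlikely" concrete under a toy model where each byte is independent and uniform over its 256 possible values: the 30-character message has probability (1/256)^30 = 2^-240, roughly 10^-72. The sample space and the measure are uncontroversial here, which is exactly what fine tuning lacks.)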
The metaphysical question about the universal constants is emphatically NOT like this, it has a whole bunch of controversial assumptions about the nature of your random-universe-generator that must be solved before you can say something is “unlikely.” I think this objection is essentially identical to the aliens-holding-tacos objection of BB.
>Just as a note, I’m trying not to come off too elitist here so I’ll say this: I love teaching! I sincerely try to do a good job at explaining the math, and you all have been very kind! Please, ask me for clarification, I will do my level best to explain any point I make at any level of sophistication you ask: five-year-old, postgraduate, whatever.
I'm glad you wrote this, because I did not understand your section on statistical mechanics...and that turned out to be pretty central to your argument!
I think I was with you up until the Borel Set part. How does attaching a sigma-algebra let us meaningfully discuss the densities of the phase states? (Note: I have no idea what a sigma-algebra is). If all this lets us talk about "useful average quantities", what is the useful average quantity for aliens holding tacos? Can this math tell us how likely it is that the nearest alien is holding a taco? It sounds like you're saying it can; does that mean you can give me an actual number saying how likely it is? Or just that it is theoretically possible to do so?
As far as how much you have to dumb down your answer: the highest math I ever got to was Calculus. I aced the class but failed the AP exam. I always got good grades in math but I never liked math, so as soon as I didn't have to take math courses anymore I didn't.
Let me see if I can explain better.
Consider a six-sided die. You have six individual outcomes. That’s your sample space. Collections of outcomes also have to be measurable—for example, the set of even numbers you can roll. That’s the sigma algebra, the collection of all sets of outcomes to which we can assign a probability. So for example, the set {2,4,6} (rolling even) is one element of the sigma algebra.
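Here is a minimal sketch of that discrete probability space in Python (my toy illustration, with the sigma algebra taken to be the full power set, which is the natural choice for a finite sample space):

```python
from itertools import chain, combinations

# Sample space: the six faces of a fair die.
outcomes = {1, 2, 3, 4, 5, 6}

# Sigma algebra: for a finite sample space we can take the power set,
# i.e. every subset of outcomes is an "event" we can assign probability to.
def power_set(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

sigma_algebra = power_set(outcomes)  # 2^6 = 64 events

# Measure: uniform probability, P(event) = |event| / |sample space|.
def prob(event):
    return len(event) / len(outcomes)

evens = frozenset({2, 4, 6})
assert evens in sigma_algebra
print(prob(evens))  # 0.5, the "rolling even" event from the text
```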
With continuous sample spaces, it’s a similar process to construct the sigma algebra. Essentially, instead of grouping discrete rolls of the die, you construct rules for adding together intervals, like [0,1] and [2,3]. There is more than one way to do this; the preferred method for stat mech is called a Borel algebra. Essentially, the Borel algebra contains all the “reasonable” subsets of the real line we might want to assign a probability, while intentionally avoiding some problematic non-measurable sets.
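(Whatever intervals you start from, the property the construction must preserve is additivity on disjoint pieces: the measure of [0,1] ∪ [2,3] is the measure of [0,1] plus the measure of [2,3], i.e. 1 + 1 = 2.)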
So let’s say we’re interested in aliens holding tacos, and let’s ignore anything quantum and weird. Every atom in the universe has some position, and some momentum, each of which is a real number (actually a 3-vector). There are N atoms. This gives us 6N real numbers describing the possible states of the universe. Aliens holding tacos is some small subset of the total possible states. The measure of states with aliens-holding-tacos divided by the measure of total universe states is the probability of the aliens-holding-tacos scenario, presumably very small.
Importantly, though I CANNOT give you a precise number for the alien-taco fraction of total universe states, I CAN rigorously describe the sample space (6N real numbers), the sigma algebra (Borel algebra, collections of these 6N numbers constructed according to some rules), and the measure (Lebesgue measure). So while I can’t execute the calculation, I can formally state the problem without any ambiguity or contradiction.
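To make the structure concrete, here is a toy sketch in Python: one particle instead of N atoms (so a 6-dimensional phase space), boxed into [0,1]^6 so the total measure is finite, with a small region standing in for "aliens holding tacos" (the stand-in region is my invention, purely for illustration):

```python
import random

DIM = 6          # one particle: 3 position + 3 momentum coordinates
N_SAMPLES = 100_000

# Toy stand-in for "aliens holding tacos": the region where every
# coordinate is below 0.5. Its true measure fraction is 0.5^6, about 1.6%.
def in_region(state):
    return all(x < 0.5 for x in state)

# Monte Carlo: sample states uniformly from the unit box [0,1]^6 and
# count the fraction landing in the region of interest.
hits = sum(
    in_region([random.random() for _ in range(DIM)])
    for _ in range(N_SAMPLES)
)
print(hits / N_SAMPLES)  # ~0.0156, i.e. measure(region)/measure(box)
```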
Does this help?
That's very helpful yes. To see if I understand you: you're saying that since atoms can be described using two 3-vectors, then we can define the total possible states of N atoms as 6N. Which means we can describe the probability of aliens holding tacos X as X/6N, even when we don't know what X is and what N is.
But, we can't do this for things like "possible ways the universal constants could have been" or stuff like that because those things, unlike atoms, can't be described using 6 variables? So the probability would be like X/Y where we can't actually define what either of those variable could be?
If I'm following you so far (and I recognize I may have gone off trail already), then questions about the probability of things made of atoms are describable but probability of things not made of atoms may not be. If so, I can see why you switched the problem from "Is the nearest alien thinking about tacos" to "is the nearest alien holding a taco" since subjective things like thoughts might arguably not be made of atoms. I know BB thinks they aren't made out of atoms, certainly.
But doesn't this mean that you're not actually addressing his alien taco objection? It seems like we can reasonably conclude that the nearest alien probably isn't thinking about tacos, even if thoughts aren't made of atoms. But if we can reason probabilistically about immaterial thoughts, then why not about all the fine tuning business?
Or would you say that we can't reason probabilistically about hypothetical non-material thoughts, even if it seems intuitively like we can, because they can't be described the way the position and momentum of atoms can?
You are essentially right, let me just correct the math a bit.
The X/6N formula is not quite right. 6N is the number of dimensions of the space, not its total size. Think of a room: its dimensionality is 3 (length, width, height), but its volume might be 1,000 cubic meters (a big room). The probability P would be something like X/[integral over all possible states in 6N dimensions].
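Spelled out, with q the positions and p the momenta, and assuming the denominator comes out finite (say, because an energy constraint bounds the accessible states, an assumption I'm adding for tidiness): P(taco) = [integral of dq dp over the taco region X] / [integral of dq dp over all allowed states].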
Now, for your main points:
For possible universes, the probability is like X/Y where we can't define Y. Precisely so, this is the central problem. Without a well-defined sample space, we cannot define either X or Y.
You also correctly deduced why I switched from "thinking" to "holding" tacos. A taco is a physical object, uncontroversially so. It allows us to define a region in our phase space (our X) without getting bogged down in the “are thoughts material” debate, which isn't relevant to my critique.
This all leads to your final question: "Or would you say that we can't reason probabilistically about hypothetical non-material thoughts... because they can't be described the way the position and momentum of atoms can?"
YES, EXACTLY. The reason we feel we can say "it's unlikely an alien is thinking about tacos" is because we are using an informal, intuitive sense of probability that doesn’t lend itself to doing math with it. This is BB’s core error.
If we managed to come up with some definition for “space of possible thoughts,” well then, now we’re in business and we can calculate this. Perhaps one is tempted to say that thoughts are also configurations of atoms in brains…? But that’s a debate we’d have to resolve first, and without a construction of “the space of thought” we cannot calculate the probability of aliens thinking about tacos any more than we can calculate the probability of the universe’s physical constants without knowing what the possibilities are.
Thanks for taking the time to explain all of this.
The only thing I'm left with is that, while you may be right (I have no argument against it) that when we say "it's unlikely an alien is thinking about tacos" (assuming non-material thoughts) we are "using an informal, intuitive sense of probability that doesn't lend itself to doing math with it", it still seems to me that the non-math probability makes sense? I mean, it seems really unlikely that the closest alien is thinking about tacos, even if we can't use math to say how unlikely. It would seem unreasonable to say "because it is impossible to figure out the probability using numbers, we can't be justified in believing that it is unlikely the nearest alien is thinking about tacos."
On the other hand, since BB does use a lot of math and actual numbers in his fine tuning argument I can definitely see how it is a strong critique to say "You can't actually use math this way."
Just to be clear, we CAN rescue the alien thinking scenario by simply forcing thoughts to be purely physical. But more to the point, it does seem like this intuitive probability makes sense AND you can’t do math with it. But that’s a bit of a contradiction, no? That’s a clue that maybe it doesn’t make as much sense as you thought, and the exercise of trying to formalize it would lead you smack into the “are thoughts physical” problem, which is important, interesting, and has to be resolved first!
I'm not sure how much of a contradiction it is. If things exist that are not made of atoms, shouldn't we still be able to reason about them?
I don't think the responses from Scott or BB are substantive enough to undermine your original point. Sure, they forced you to clarify and reinforce certain parts of your original argument, but you have pretty convincingly proven in this context that Bayesian reasoning is ill-suited to the task of determining the probability that God exists using the FTA.
> "someone is optimizing for life" predicts fine-tuning within this range
Only after the fact, though, which points to an important limitation with purely Bayesian reasoning. If you have a "theory" that you've constructed to explain a set of facts and then you find out that it does, in fact, explain those facts, then those facts don't provide any real evidence for the theory. The theory is, at best, consistent with the facts, which it should be, given its provenance.
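To put the worry in the odds form of Bayes' theorem (a toy sketch with invented numbers, not anyone's actual figures): a theory tailored to the facts gets P(facts | theory) ≈ 1 by construction, so all the action is in the two numbers the tailoring never touched, the prior odds and P(facts | no theory), which is exactly where the real disagreement lives.

```python
def posterior_odds(prior_odds, p_e_given_t, p_e_given_not_t):
    """Odds form of Bayes: posterior odds = prior odds * Bayes factor."""
    return prior_odds * (p_e_given_t / p_e_given_not_t)

# A theory constructed post hoc to explain the facts fits them perfectly:
p_fit = 1.0

# But the update still hinges on two contested inputs: the prior odds of
# the theory, and how likely the facts were without it (invented numbers).
print(posterior_odds(prior_odds=1e-6, p_e_given_t=p_fit, p_e_given_not_t=0.01))
# 1e-4: a perfect fit, yet the theory remains wildly improbable.
```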
Not to mention that "theories" like "someone is optimizing for life" or "God exists and is a perfect being of goodness" aren't really theories in any meaningful sense and don't actually predict (or post-dict) anything specific (e.g., a constant in a mathematical model).
> I'm nervous that work like this is taking something that's obviously true - it's really weird that these constants are in the tiny realm suitable for life - and merely raising enough objections that you're not allowed to talk about it unless you have ten years of postgraduate math.
Why is it so weird that these constants are what they are? I don't share the intuition that this is surprising or weird. Even if I put aside Ian Jobling's convincing (to me, anyway) argument (https://substack.com/@ianjobling/p-163484791) and accept that the values of the constants in question are crucially important for life in our universe, I question the rationale for considering other values. Is it just because of the mathematical nature of them? That is, they are real numbers (more or less), and real numbers extend from negative infinity to infinity and fill in all the little gaps, so any possible mathematical value for a given constant is a possible physical value? Maybe I'm just restating some combination of the measure problem and the Brute Fact perspective...
Anyway, that's not even my main point. Rather, I find it endlessly fascinating and perplexing that Scott has "P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary." as his blog's tagline, but he can't be arsed to do his due diligence to back it up. The original measure problem post in response to BB required nothing like ten years of postgraduate math, and I don't recall ever seeing Scott use anything other than run of the mill frequentist statistical tests when he analyzes data.
Maybe this is overly harsh - I've enjoyed many of his posts over many years, and I don't have anything against him - but his statistical incoherence is really salient to me.
You know, I recall a statement Scott made in this article (https://www.astralcodexten.com/p/practically-a-book-review-rootclaim).
"But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition."
I think this is likely a good statement of his position. He does mention he's never been great at math, and in his defense, it IS legitimately hard to do Bayesian inference. Have you ever done Markov chain Monte Carlo? You sure as hell don't do that in your head.
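For anyone wondering what that involves, here is roughly the smallest possible Metropolis sampler (a toy sketch targeting an unnormalized standard normal, nothing specific to this debate):

```python
import math
import random

def target(x):
    # Unnormalized density of a standard normal. MCMC only ever needs
    # ratios of the target, so the normalizing constant cancels out.
    return math.exp(-0.5 * x * x)

def metropolis(n_steps, step_size=1.0):
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step_size)
        # Accept with probability min(1, target(proposal)/target(x)).
        if random.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

chain = metropolis(50_000)
print(sum(chain) / len(chain))  # ~0.0, the mean of the target
```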
...But sometimes there is no substitute, and you kinda have to do the math, you know?
Very honored to have received a mention! (ExpressSmoke is my Reddit acct). Big fan both of you and BB. You two are unironically brilliant. Keep up the great work!
Congratulations on all of the engagement on your article! I read it and it was pretty much over my head, but I always enjoy your writing.
This has been an entertaining and enlightening debate. I'm on your side.
But where did BB come up with infinite aliens and tacos?? In his cosmology, the universe is finite. A giant snow globe.
Or do we somehow get n>1 alien civilizations per planet?
Is his cosmology really finite? That would really surprise me given some of his other views…
Don't all supernaturalists believe the universe is finite??? Where's their Superbeing otherwise?
He thinks the universe(s) is (are) infinite, but that God is non-spatial and so does not compete with them for space.
That's a fudge. You mean like in a 5th dimension, or in between the superstrings?
Either way, sounds more like Spinoza than Yahweh.
Non-spatial is not some extra dimension; immaterial is immaterial, not soul stuff.
God's omnipresence has been an important doctrine for over a millennium, so these questions would arise even with a finite universe, and the answers are much the same.
Where is Heaven?
I can't even begin to take seriously the idea that we could infer God from fine tuning. We have no idea how many universes exist, and no idea how many of those support life. It's all too undefined.
But, in among the more complex corners of this discussion, I find it very strange that some people just accept the idea that we can apply probability notions across scenarios that differ in number of observers under the assumption that we are equally likely to be any one observer. Who says we are equally likely to be any single observer?
All probability calculations require that we have some valid means of deciding what things are equally likely, and we get contradictory results if we normalise across universes or normalise across observers. And there is literally no right answer as to which one we should normalise over. We didn't start as souls in heaven and then get randomly assigned to an observer shell with equal probability across all shells. The idea is innately dualistic.
I am reminded of the Bertrand Paradox.
https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)
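For anyone who doesn't want to click through: Bertrand's question asks for the probability that a random chord of the unit circle is longer than sqrt(3), the side of the inscribed equilateral triangle. A quick simulation (my sketch, with three standard ways of picking "a random chord") gives three different answers, which is exactly the normalisation problem above:

```python
import math
import random

LONG = math.sqrt(3)  # side length of the inscribed equilateral triangle

def chord_random_endpoints():
    # Method 1: pick two endpoints uniformly on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_random_radius():
    # Method 2: pick the chord's distance from the center uniformly.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def chord_random_midpoint():
    # Method 3: pick the chord's midpoint uniformly in the disk.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

for method in (chord_random_endpoints, chord_random_radius, chord_random_midpoint):
    p = sum(method() > LONG for _ in range(100_000)) / 100_000
    print(method.__name__, round(p, 3))  # ~1/3, ~1/2, ~1/4 respectively
```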
"Who says we are equally likely to be any single observer?"
This is the core of the "self-indication assumption" that Bentham's Bulldog likes or the "self-sampling assumption" that Nick Bostrom makes. One of the commenters, Onid, said he was writing an article on this that I'm looking forward to, because I am also deeply suspicious of this argument.
I was thinking of addressing this in a short post myself. I am no mathematician but I am keenly aware of the fact that probability makes no sense in isolation; it must be applied against defensible assumptions. Often those assumptions have to appeal to probability, in the set-up, so it is barely possible to define probability in non-circular terms.
I will look into what has already been written about SIA, but it seems to sneak in dualist assumptions.
Regarding ZFC / Vitali Sets / Banach-Tarski:
As some others remarked, the bunch of axioms one uses for mathematics is to some extent a matter of personal taste, and is culturally contingent. If one uses different axioms, one gets different mathematical universes. Some theorems change, some proofs change, much stays similar. One could call this the "unreasonable effectiveness of intuitive mathematical reasoning" -- for the practical mathematician, foundations don't matter.
The mathematical community mostly converged on ZFC, back during the foundations debate. The ZFC universe of mathematics is very large (i.e. it is easy to show that objects "exist"). So this can be viewed as a trade-off: AC makes many proofs nice and simple for undergrads ("every vector space has a basis, nice easy existence proof that fits into a lecture and showcases how to Zorn"), but at the same time it invites "eldritch abominations from the abyss" like non-measurable sets into our world, and we mathematicians have to spend some effort keeping these abominations in check.
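For the curious, the canonical abomination is the Vitali set, and the construction fits in a paragraph (textbook material, sketched from memory): call two reals in [0,1] equivalent if their difference is rational; use AC to choose one representative from each equivalence class and collect them into a set V. The countably many rational translates V + q, for q rational in [-1,1], are pairwise disjoint, together cover [0,1], and all fit inside [-1,2]. If V had a Lebesgue measure m, countable additivity plus translation invariance would force 1 ≤ m + m + m + ... ≤ 3, which fails whether m = 0 (the sum is 0) or m > 0 (the sum is infinite). So V is non-measurable.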
I think it's a good trade-off: Many proofs get easier with AC, a few proofs get slightly harder and some theorems get slightly more convoluted to state (well, add measurability to your assumptions, one extra word spent).
But it's naive to talk about that kind of stuff like it's objective truth, ripped straight from the hands of god.