# Hydroxychloroquine study



## Michi (Sep 22, 2020)

Following up on an earlier post on the advantages and failings of peer review…

The Guardian just published this article about the Lancet's retraction of a hydroxychloroquine study.

If you don't want to read it in full, the TLDR is that:

- The Lancet published a study claiming to show that hydroxychloroquine causes a higher death rate and more heart-related complications in Covid-19 patients.
- The study relied on a data set from the Surgisphere database.
- Multiple researchers pointed out that figures cited by the authors of the paper did not line up with official data.
- One of the co-authors is the founder of the database.
- Following the complaints, information about Surgisphere was deleted from the Internet.
- None of the other authors had seen the data first-hand, and they were denied access to the dataset.
- Publication of the study led to a temporary stop of controlled studies into the efficacy of the drug.
- The Lancet has changed its peer review policy to be more stringent.
Note that the above does not mean that hydroxychloroquine is efficacious. Other studies have shown that it is not. But the publication of the paper—most likely based on fabricated data—did halt studies that, otherwise, would not have been delayed.

More importantly, this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective. There will always be some idiots who fabricate results and sneak them past reviewers for ulterior motives. But, in the end, the truth comes out on top.


----------



## MarcelNL (Sep 22, 2020)

Michi said:


> Following up on an earlier post on the advantages and failings of peer review…
> 
> The Guardian just published this article about the Lancet's retraction of a hydroxychloroquine study.
> 
> ...



The sentence in bold should have made the 'other authors' wary of the data.....

I keep being amazed at how easy it seems for HCPs to publish articles of which we only *much* later (this is relatively fast!) find out that the data were misrepresented, altered, manipulated or even 'invented'. I work in 'the industry', where it rains audits and inspections to ensure data quality, and where repercussions for 'incidents' like this are very real and severe (as they should be).


----------



## VicVox72 (Sep 22, 2020)

I am in a different academic discipline than medicine, and I am always amazed by how bad the empirical work of doctors is.

The sad fact is that almost no doctors know any stats or have any "data science" skills whatsoever. They rely on external data managers and external biostatistics people to work their data and run their statistical analysis. Their contribution to the paper is some "subject matter expertise", which often ends up being half-baked, implausible theories from half-understood biology concepts.

The Lancet has a history of horribly insufficient peer review, inability to check authors' claims, and lack of understanding of basic statistical methodology.

As my stats professor used to joke, "the people most opposed to evidence-based medicine are doctors."


----------



## nakiriknaifuwaifu (Sep 22, 2020)

VicVox72 said:


> I am in a different academic discipline than medicine and I am always amazed by how amazingly bad the empirical work of doctors is.
> 
> The sad fact is that almost no doctors know any stats or have any "data science" skills whatsoever. They rely on external data managers and external biostatistics people to work their data and run their statistical analysis. Their contribution to the paper is some "subject matter expertise" which often ends up being half baked implausible theories from half understood biology concepts.
> 
> ...



While I'm not sure I agree with you on the first part on doctors, the Lancet is a dumpster fire of a journal.

Guess which journal published (and then retracted) the paper associating autism with the MMR vaccine...

This is an interesting article hypothesizing the role of bradykinins in severe COVID cases: Bradykinin and the Coronavirus


----------



## juice (Sep 23, 2020)

Michi said:


> More importantly, this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective.


This is pretty dependent on the field, though, which I think is an important point. This got caught because the breaches were incredibly flagrant, and it should NEVER have got to the publishing stage. This is less a triumph of the system than a cautionary tale about taking some fields at face value when it comes to "peer-reviewed" studies.

As @ian can attest, hard sciences like maths are far more rigorous and less open to mates being able to "peer-review" papers.


----------



## MarcelNL (Sep 23, 2020)

VicVox72 said:


> I am in a different academic discipline than medicine and I am always amazed by how amazingly bad the empirical work of doctors is.
> 
> The sad fact is that almost no doctors know any stats or have any "data science" skills whatsoever. They rely on external data managers and external biostatistics people to work their data and run their statistical analysis. Their contribution to the paper is some "subject matter expertise" which often ends up being half baked implausible theories from half understood biology concepts.
> 
> ...


The Lancet is one of the top journals in medicine; getting a paper published is difficult in most high-ranking journals, but even more so with The Lancet. Editors and reviewers for those journals are typically highly renowned experts in their field, and yes, that review includes statisticians. Is there some selection bias, or is their number of frauds/retractions actually higher than that of any other journal?

MDs get a pretty solid background in stats; the few MDs I frequently work with can tell most biostatisticians a thing or two, simply because they are subject matter experts in their area of expertise. The biggest FU's in protocol writing that I have recently seen were made by statisticians designing analysis plans for topics they had no clue about. By itself, statistics does not add much to science other than methodology; 'statistics can prove that statistics can prove anything'... Don't get me wrong, you need a good statistician to write a decent protocol.

Articles in high-ranking journals such as The Lancet are most definitely not based on empirical data alone (you'd be hard pressed to get even a case report published based on empirical data alone). The problem, IMO, is caused by a lack of backbone in academia to resist the pressure to publish (the most cynical incentive for universities and governments to use, if data quality and integrity are of any importance) and by practical issues of repeatability and ethics.

Doing clinical studies means you need patients, an experimental drug or method, approvals, consents, and (lots of) money. The days of Mengele and Tuskegee are behind us, even if not by that long.
Imagine what happens if you want to see whether the results published by Dr X are any good and want to repeat the experiment: you somehow get a bucket of funding, write up a protocol (BTW, including a statistical analysis plan, background, hypothesis, etc.; see the ICH chapter), submit to an ethics committee... and likely get a NO-GO, because the experiment has already been done, as you explained in the background section of your protocol, so the research adds nothing new and the risk/benefit ratio just is not favourable for patients.

Unless the data used for the original paper are checked and scrutinized in the same way as when a pharmaceutical company initiates clinical research to get a drug approved, the data may be subject to fraud that is difficult to detect. Is it fair to blame a journal for someone who wants 5 minutes of fame at the cost of science and patients, or who needs to publish to keep their position/status/faculty ranking up? I don't think so. At the same time, scientific fraud within academia is hardly ever prosecuted for the crime against humanity it really is...


----------



## ian (Sep 23, 2020)

juice said:


> As @ian can attest, hard sciences like maths are far more rigorous and less open to mates being able to "peer-review" papers.



Absolutely!

Although of the two review jobs I have to do in the next couple weeks, one is for a paper written by my mathematical grandma (advisor’s advisor), and the other is written by one of my former coauthors and good friends, collaborating with a guy I’ve hosted multiple times at my house and a woman that I’ve hung out with a few times at bars at conferences.

It’s a small field. 

I’ve seen a couple papers in my field eventually be exposed as wrong, but that’s a couple out of hundreds or thousands. Many papers have small errors: that’s kind of expected, and not a big deal if it doesn’t kill the main results. Most arguments are robust enough to survive small modifications.

If a paper in hyperbolic geometry is wrong, though, the consequences for the rest of the world are rather minimal.


----------



## Nemo (Sep 23, 2020)

ian said:


> If a paper in hyperbolic geometry is wrong, though, the consequences for the rest of the world are rather minimal.


Unless you are describing the geometry of spacetime, in which case the consequences could be quite severe.


----------



## ian (Sep 23, 2020)

Hah, so you too dream about a world in which natural laws are dictated by scientific papers, rather than the other way around. The power, the unimaginable poowwer!


----------



## MarcelNL (Sep 23, 2020)

Nepotism is pretty much out of the equation; peer review is not done by 'mates' but by peers with their own professional attitude and reputation. Neither is it a requirement that the reviewers not know or not like the author; in most cases the author does not know who reviewed the manuscript anyway.
I've done 'some' bar time with folks who happily drill holes, real or perceived, in manuscripts I'm one of the authors of... Peer review is not infallible, and reviewers do not necessarily have all the knowledge the authors have, but they usually are quite capable of checking methodology and the conclusions drawn. The biggest gap, as I see it, is in quality control of the raw data, from collection to analysis. Things are improving, but slowly.


----------



## VicVox72 (Sep 23, 2020)

In lots of disciplines peer review is not double blind: the reviewers know the identity of the authors. And in many disciplines, the set of "peers" (i.e. people with the subject matter expertise to competently evaluate a paper) is really small. So you often end up with situations where even the authors more or less know who the referees are, because it could only be five different people, and they already know the opinion of three of them on the paper because they discussed it at a conference.

So nepotism is most definitely not out of the equation for many, many disciplines and subfields.

Back to MDs and stats -- that's great that you work with competent MDs. Good that some subfields of medicine do proper work. 

But the given Lancet study had such humongously obvious flaws that I don't think we can trust any of the referees or editors to ever participate in proper peer review again. The data set was obviously fake: the sample sizes did not even remotely correspond to what was going on in the areas studied. Any referee could have cross-checked the summary stats in the paper against what is public info regarding COVID and hospitals... as a lot of independent third-party researchers did! Why the Lancet referees didn't do that is a mystery to me.

I understand that, in general, you have to go on faith that the data aren't made up when reviewing. You can't, as a referee, check each row of the data set, go back to the lab and ask if the study was actually run, etc. That is infeasible. But when the data were not even shared with most of the authors on the paper, the paper is incredibly consequential, and the data stem from an unknown new company, maybe at least check whether the data pass the most basic smell test?


----------



## MarcelNL (Sep 23, 2020)

I do agree that a reviewer cannot, and should not even begin to, check each data line; I'd advocate a system of data quality control at the source, like it's done with sponsor-initiated clinical studies. I think another major issue is that publication pressure is incredibly high, which may lower people's threshold for checking what they sign off on... I do wonder how they ever qualified for authorship according to the ICMJE guidelines...


----------



## VicVox72 (Sep 23, 2020)

MarcelNL said:


> I do agree that a reviewer cannot, and should not even begin to, check each data line; I'd advocate a system of data quality control at the source, like it's done with sponsor-initiated clinical studies. I think another major issue is that publication pressure is incredibly high, which may lower people's threshold for checking what they sign off on... I do wonder how they ever qualified for authorship according to the ICMJE guidelines...



In other disciplines, you need to upload "replication packages" that reproduce the entire paper from the raw data (if the raw data can be shared), or at least the statistical code to do so plus whatever intermediate data are sharable. Something like that would already have helped here, because it would make it much easier and more transparent for a reviewer to check whether the authors even ran the code for their analysis, inspect (not by hand, but statistically) intermediate data, etc.

The higher the stakes, the better and more stringent the peer review needs to be, up to and including doing basic "accounting fraud" checks on the data, such as "does the number of people in the sample exceed the positive covid cases in a locality?"
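A check like that is easy to mechanize. As a hypothetical sketch (the region names and all numbers below are invented purely for illustration, not taken from the actual study):

```python
# Hypothetical "smell test": flag study sites whose claimed sample size
# exceeds the publicly reported case count for that locality.
# All names and numbers are invented for illustration.

def flag_implausible_sites(claimed_samples, public_case_counts):
    """Return sites claiming more patients than publicly reported cases."""
    return sorted(
        site
        for site, n in claimed_samples.items()
        if n > public_case_counts.get(site, 0)
    )

claimed = {"Region A": 4400, "Region B": 1200, "Region C": 950}
public = {"Region A": 3100, "Region B": 5000, "Region C": 8000}

print(flag_implausible_sites(claimed, public))  # ['Region A']
```

Anything a script like this flags is not proof of fraud, but it is exactly the kind of five-minute cross-check against public data that independent researchers actually performed.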


----------



## MarcelNL (Sep 23, 2020)

Clinical studies are a bit more complicated than that; there are all kinds of restrictions. Data privacy may sound like a welcome thing for us all, yet there are areas where it works against us. This is one of them.

It's one thing to bash one study where misconduct was discovered (if far too late); that does not mean that medical science does not work. There are S%^&loads of articles on the treatment of COVID patients that actually help to advance treatment.


----------



## VicVox72 (Sep 23, 2020)

I mean, I work in an empirical discipline. I don't want to sound like I am bashing all research. Lots of important and good research gets done every day by super diligent honest researchers that put all their hours and all their energy into getting it right. 

But that only RAISES the bar for the top medicine journal, for example, to ensure that what they publish goes through proper review. The flip side of bad peer review is that you not only promote bad science but also fail to reward good science. And what's the quickest way to turn an honest, diligent researcher into a fraudster? I'd guess having your paper rejected because the referee doesn't understand basic stats and/or was too lazy to read your methods section, while seeing some inflammatory, flashy BS get published in the same week. Why put your early mornings and late evenings in when you can get a higher chance of a publication with a bogus statistical method, just dumb enough for the average bored and lazy referee, and made-up data?

(As an aside: even if the Lancet study's data had been real, the paper was junk... which it shares with the Lancet vaccine paper. That paper was fraudulent in terms of data, but even dumber in terms of stats. It should never have been published even if the data had been real.)


----------



## nakiriknaifuwaifu (Sep 23, 2020)

The publish or perish mentality has been a real detriment to science and proper scientific research.


----------



## ian (Sep 23, 2020)

nakiriknaifuwaifu said:


> The publish or perish mentality has been a real detriment to science and proper scientific research.









Image credit: SMBC


----------



## MarcelNL (Sep 23, 2020)

Yeah, as if putting out articles helps to advance science; look in any lower-ranking journal and you'll see plenty of articles on stuff already published several times, but now with a minor, insignificant tweak. Heck, usually the originals or first copies are referenced in the articles... It keeps plenty of folks real busy, but it does nothing for actual progress.


----------



## ian (Sep 23, 2020)

MarcelNL said:


> Yeah, as if putting out articles helps to advance science; look in any lower-ranking journal and you'll see plenty of articles on stuff already published several times, but now with a minor, insignificant tweak. Heck, usually the originals or first copies are referenced in the articles... It keeps plenty of folks real busy, but it does nothing for actual progress.



This is a really dismissive comment. It’s true that a small number of papers are responsible for most of the progress in a field, but you don’t necessarily know which ones they’re going to be before they’re completed, or even immediately after they’re published. And even those eventually important papers are influenced by lots of other, smaller papers. I’ve had multiple experiences publishing papers in great journals while citing smaller works published in lesser journals, and I’ve published a lot of stuff in lesser journals myself. Regarding your objections to trivial modifications of existing studies: I thought reproducing the results of important studies was supposed to be important too? Wasn’t that kind of the argument above?

Also, it’s not unreasonable to think that universities and agencies issuing research grants should evaluate a researcher’s previous output when considering future funding... Are you suggesting they just give everyone in the field equal funding and 20 years to complete their dream projects? That sounds like a massively inefficient way to allocate funds. And many scientists need some sort of external motivation to get themselves moving, too. I know I do most of my work when there’s a deadline somewhere in the future.

All that said, tenure does take pressure off some scientists, allowing them to focus their attention on things they really care about. I’ve definitely felt that in the past few years, since I was lucky enough to get tenure and also a 5 yr government grant around the same time. But at the same time, I haven’t been as productive as I was when deadlines were looming. Now that I’m getting closer to having to reapply for a grant, the motivation has returned in full force. (The tenure exception also doesn’t really apply in fields where all research requires massive funding, though.)

Finally, one should realize that “publish or perish” doesn’t mean “publish some trivial crap and you’ll get all the funding you want”. The quality of your output determines the kind of job and amount of funding you’ll get. So there is actually motivation to work on quality projects, not just drivel.

TLDR: yeah, publish or perish probably results in some trivial papers being published, but have you got a better idea for how to allocate funds and motivate scientists?


----------



## VicVox72 (Sep 23, 2020)

MarcelNL said:


> Yeah, as if putting out articles helps to advance science; look in any lower-ranking journal and you'll see plenty of articles on stuff already published several times, but now with a minor, insignificant tweak. Heck, usually the originals or first copies are referenced in the articles... It keeps plenty of folks real busy, but it does nothing for actual progress.



I don't want to come across as being needlessly confrontational to each of your messages but .. this is just such a weird thing to complain about. 

In literally every single empirical discipline, the largest problem standing in the way of progress is a LACK of independent replication, i.e. "trivial modification of what is already known". That's literally the thing we lack the most as it stands.

Every single empirical field has a huge "false positive" issue, with hundreds of studies being published that have almost no chance of being replicated. And it's not just the usual suspects, say, social psychology, that pump out false findings by the minute. Cognitive psych, medicine, sociology, economics, education, etc are only marginal degrees better (and some like nutrition and criminology are even worse). 

What we need is people taking grandiose claims in papers with p-value 0.045 and subjecting them to independent replication. If they then have an idea of how to "marginally" extend the analysis or experiment, even better. What we need is more robust research, not even more "only the most grandiose idea gets published" incentives that just lead to a ton of spurious false positives, each of which has the potential to throw a discipline years backwards. (Think about how social psych has made virtually no systematic progress over the last twenty years, thanks to entire research teams dedicating thousands of research hours and hundreds of thousands in research funding to now-debunked "big ideas", from positive psychology, over power poses, to virtually every other big idea in psych...)


----------



## MarcelNL (Sep 23, 2020)

I was merely responding to the cartoon Ian posted, which to me came across as dismissive. I agree with your points Ian, so if I was dismissive (and I was), it was about the cartoon. 

That said, repetition of experiments is a key part of science. I was trying to say that I observe (not complain) that there is also an abundance of articles written because folks need to write papers, so for want of a new idea, or even a new insight, a tweak to something existing is used; just follow the references in some articles.

VicVox, all true. I do not advocate how scientists are currently 'managed'; there must be a middle ground between giving them a nice place to work and X amount of funding and seeing what comes out, and chasing them to death to publish. We live in an illusion of control: we cannot really administrate science forward. It happens; people discover major things while tinkering and messing around.


----------



## ian (Sep 23, 2020)

MarcelNL said:


> I was merely responding to the cartoon Ian posted, which to me came across as dismissive. I agree with your points Ian, so if I was dismissive (and I was), it was about the cartoon.



A cartoon, dismissive? What?


----------



## juice (Sep 23, 2020)

Also needing to be borne in mind is the fact that "science" is a broad church as a word in general usage, covering actual verifiable hard sciences like Ian's and spreading all the way out to areas like nutritional epidemiology (i.e. the personal belief systems of the researcher), while still being described by that same word, much as "car" covers everything from the Trabant to the Veyron.


----------



## ma_sha1 (Sep 23, 2020)

I agree with some sentiments from a few prior posts. I’ve had collaborations with several MDs on biological research, & published papers with some of them. I was surprised that their experimental designs were full of holes. I realized that they weren’t trained to do real scientific research. Thank god that biopharmaceutical medicines are mostly developed by Ph.D.s; the MDs are left to “practice“ medicine, i.e. match symptoms with available prescriptions.


----------



## ian (Sep 23, 2020)

Sick burn, dude. You should probably go see an MD about it.


----------



## VicVox72 (Sep 23, 2020)

ma_sha1 said:


> I’ve had collaborations with several MDs on biological research, & published papers with them. I was surprised that their experimental designs were full of holes. Thank god that the biopharma medicines are mostly developed by Ph.D.s, the MDs are left to “practice“ medicine, i.e. match symptoms with available prescriptions.




They just don't learn much about stats, causal inference, experiment design in school. 

There's this well-known anecdote that "even" doctors are very bad at doing Bayesian updating on test results (i.e., given prevalence x, false positive rate m, false negative rate k, how likely am I to have disease Y if I get a positive result?)

I find that very unsurprising. In fact, I am sure their training actually makes them worse at it than the average highly educated person. 

Next time your doctor tells you result X means Y, ask them what their best guess is of the probability whether you ACTUALLY have Y. I'd bet it's 50% chance they don't think that's even a meaningful question that they can process, 49% that they look up the % of positive cases that test positive and tell you that, 1% they actually know the correct way to answer. 

I have done this about multiple diagnoses and never gotten the third case. Always been first or second....
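For concreteness, here is the arithmetic that question is asking for, as a minimal sketch via Bayes' rule (the rates are made-up illustrative numbers, not figures for any real disease or test):

```python
# Posterior probability of disease given a positive test, via Bayes' rule.
# All rates below are illustrative, not real figures for any disease.

def posterior_given_positive(prevalence: float, fpr: float, fnr: float) -> float:
    """P(disease | positive test) from prevalence, false positive rate,
    and false negative rate."""
    sensitivity = 1.0 - fnr  # P(positive test | disease)
    p_positive = sensitivity * prevalence + fpr * (1.0 - prevalence)
    return sensitivity * prevalence / p_positive

# 1% prevalence, 5% false positive rate, 10% false negative rate:
# a positive result means only about a 15% chance of actually having
# the disease, nowhere near the 95%+ people intuitively assume.
print(round(posterior_given_positive(0.01, 0.05, 0.10), 3))  # 0.154
```

The counterintuitive part is the base rate: at low prevalence, the false positives from the large healthy population swamp the true positives from the small sick one.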


----------



## ma_sha1 (Sep 23, 2020)

ian said:


> Sick burn, dude. You should probably go see an MD about it.



I am a puppy dog at the Dr.'s office now; I keep my mouth shut. I learned my lesson the hard way: I challenged my doctors a bit too much & one of them fired me, wrote me a letter saying that she can no longer be my doctor.


----------



## juice (Sep 23, 2020)

ma_sha1 said:


> the MDs are left to “practice“ medicine, i.e. match symptoms with available prescriptions.


Sadly all too true. What's lifestyle intervention? Can you write a script for that? No, that's not an income generator, and takes way too long.



ma_sha1 said:


> I challenged my doctors a bit too much & one of them fired me, wrote me a letter saying that she can no longer be my doctor


It's for the best, moves you along towards a doctor with a clue, or at least an open mind.


----------



## LostHighway (Sep 23, 2020)

VicVox72 said:


> They just don't learn much about stats, causal inference, experiment design in school.
> 
> There's this well known anecdote that "even" doctors are very bad at doing Bayesian updating about test results (ie, given prevalence x, false positive rate m, false negative rate k, how likely am I to have disease Y if I get a positive result?)
> 
> ...



I think that varies quite a bit from medical school to medical school. A prior (different geographic location) primary care physician of mine was one of the rare ones who actually keeps up on the literature and reads it with a critical eye. He regularly complained about the general quality of published research in his field, but singled out pharmacology in particular as the absolute nadir of research quality. The guys in that realm may have PhDs, but they mostly work for for-profit companies that are far from dispassionate about the fiscal implications of the work of their research divisions. If you believe there is a firewall dividing research from the money guys, perhaps you'd like to buy a bridge?



juice said:


> Also needing to be borne in mind is the fact that "science" is a broad church as a word in general usage, covering actual verifiable hard sciences like Ian's and spreading all the way out to areas like nutritional epidemiology (i.e. person belief systems of the researcher) while still being described by that same word, much as "car" covers everything from the Trabant to the Veyron.



I have to take some issue with this. Certainly pop nutrition fads, and the small number of people with some semblance of credentials who feed this nonsense, should be called out, but there is quite a bit of legit work being done in nutritional epidemiology. IME the social sciences taken broadly have much lower research standards, but as in any field there are exceptions that test the rule. In the interest of full disclosure, I used to work in research on heavy metal contamination in urban areas with respect to public health and have dealt with a number of epidemiologists, including one focused on diet and cancer, Australian-educated I might add.


----------



## rmrf (Sep 23, 2020)

VicVox72 said:


> They just don't learn much about stats, causal inference, experiment design in school.
> 
> There's this well known anecdote that "even" doctors are very bad at doing Bayesian updating about test results (ie, given prevalence x, false positive rate m, false negative rate k, how likely am I to have disease Y if I get a positive result?)



Is anyone good at these, though? I thought I read somewhere that humans are just bad at probabilities. I know I'm awful, so every time I see one, I have to sit down and write it out.



VicVox72 said:


> I find that very unsurprising. In fact, I am sure their training actually makes them worse at it than the average highly educated person.
> 
> Next time your doctor tells you result X means Y, ask them what their best guess is of the probability whether you ACTUALLY have Y. I'd bet it's 50% chance they don't think that's even a meaningful question that they can process, 49% that they look up the % of positive cases that test positive and tell you that, 1% they actually know the correct way to answer.
> 
> I have done this about multiple diagnoses and never gotten the third case. Always been first or second....



Of course it would be better to be accurate, but I wonder how damaging it is. For instance, if doctors always underestimate the severity of diseases that are common and easily diagnosable (low false positive rate), is that bad? I think I want my doctor to overestimate the severity of uncommon and hard-to-detect diseases.


----------



## VicVox72 (Sep 23, 2020)

rmrf said:


> Is anyone good at these though? I thought I read somewhere that humans are just bad at probabilities. I know I'm awful so every time I see it, I have to sit down and write it out.
> 
> Of course it would be better to be accurate, but I wonder how damaging it is. For instance, if doctors always underestimate the severity of diseases that are common and well diagnose-able (low false positive), is that bad? I think I want my doctor to overestimate the severity of uncommon and hard to detect diseases.



Nobody is born good at them, but if you can a) get the grades to go to med school, b) survive med school, and c) survive a residency + fellowship, you also have the brain power to learn Bayes' rule sufficiently well not to respond "the false positive rate is 1%, hence you have the disease with 99% probability".

And a lot of medical expenditure, a lot of suffering, and a handful of severe negative complications are generated by false positives. How many pregnant women had inductions leading to C-sections, leading to death or debilitation of the mother or child, due to doctors not understanding false positives in tests designed to find conditions that suggest an early induction of labor, as one specific example? Now extrapolate to needless biopsies, needless surgery in general, needless additional testing, etc., etc.


----------



## juice (Sep 23, 2020)

VicVox72 said:


> And a lot of medical expenditure, a lot of suffering, and a handful of severe negative complications are generated from false positives.


So much this


----------



## rmrf (Sep 23, 2020)

VicVox72 said:


> And a lot of medical expenditure, a lot of suffering, and a handful of severe negative complications are generated from false positives. How many pregnant women had inductions leading to c-sections leading to deaths or debilitation of the mother or child due to doctors not understanding false positives in tests designed to find conditions that suggest an early induction of labor, as one specific example? Now extrapolate to needless biopsies, needless surgery in general, needless additional testing etc etc etc



That's a good point. You've convinced me that properly estimating the probability of a bad outcome given a test is important.

However, medical professionals already have to know and understand so many things. Instead of making the "false positive rate", the "false negative rate", and the prevalence of disease Y the standard, easily available figures from which doctors make their predictions, perhaps it would be better to provide them with the end probability that they need. That is, provide "P(covid | positive covid test)" instead of "P(negative covid test | covid)", "P(positive covid test | no covid)", and "P(covid)". I understand that it's probably a lot harder than that; there are probably more variables that matter when making this prediction. But this seems like a great place where technology could help doctors.

Your point that doctors shouldn't say "the false positive rate is 1%, so you have the disease with 99% certainty" is very valid. That is certainly bad. They should at least know it isn't that easy.


----------



## VicVox72 (Sep 23, 2020)

Yes, I think one of the most puzzling features of the already very, uh... puzzling health care sector is the inability or unwillingness to adopt modern information systems to make doctors' work easier, e.g. by providing the relevant probabilities.

Most of the time it wouldn't even be very hard -- they already have to chart your info anyway, the tests are inside an electronic data system; all the algorithm needs to do is combine the charted data with the test result and reliability data for different populations, and spit out the best guess!


----------



## rmrf (Sep 23, 2020)

Wow, that would be really exciting!


----------



## nakiriknaifuwaifu (Sep 23, 2020)

rmrf said:


> Your point that doctors shouldn't say "the false positive rate is 1%, so you have the disease with 99% certainty" is very valid. That is certainly bad. They should at least know it isn't that easy.



You mean that's not how it works??? 

Now, the constant sh*t-talking of doctors rubs me the wrong way (especially when followed with something about nurses or NPs being superior), but we have yet to learn of Bayes Rule.

Also @ian The timescales required in translational research for animals to live, grow, and die are often such that the more rigorous a study is, the more time and resources it will take. I'm unclear if there are similar principles in mathematics research.


----------



## ian (Sep 23, 2020)

nakiriknaifuwaifu said:


> Also @ian The timescales required in translational research for animals to live, grow, and die are often such that the more rigorous a study is, the more time and resources it will take. I'm unclear if there are similar principles in mathematics research.



 Harder problems usually take more time to solve, but the funds you need to buy stationery (and the occasional laptop) do not depend on the problem. Applied mathematicians may have a different answer though.


----------



## Nemo (Sep 24, 2020)

VicVox72 said:


> Yes, I think one of the most puzzling features of the already very, uh... puzzling health care sector is the inability or unwillingness to adopt modern information systems to make doctors' work easier, e.g. by providing the relevant probabilities.
> 
> Most of the time it wouldn't even be very hard -- they already have to chart your info anyway, the tests are inside an electronic data system; all the algorithm needs to do is combine the charted data with the test result and reliability data for different populations, and spit out the best guess!


These are available in many situations. For example, there are a number of tools available to calculate the likelihood of death, or of failure to return to independent life, for patients having certain types of surgery. For frail people or people with significant comorbidities, the numbers are often much worse than you would think.


----------



## Nemo (Sep 24, 2020)

rmrf said:


> That's a good point. You've convinced me that properly estimating the probability of a bad outcome given a test is important.
> 
> However, medical professionals already have to know and understand so many things. Maybe instead of making "false positive rate" and "false negative rate" and "population of people with disease Y" the standard and easily available features for the doctors to make their predictions, perhaps it would be better to provide them with the end probability that they need. That is, provide "P(covid | positive covid test)" instead of "P(negative covid test | covid)", "P(positive covid test | no covid)", and "P(covid)". I understand that its probably a lot harder than that; there's probably more variables that are important when making this prediction.



You are kind of describing the difference between sensitivity/ specificity of a test and positive/ negative predictive value of a test result.

Sensitivity: What % of actual positives will the test detect.
Specificity: What % of actual negatives will the test say is negative.

Positive predictive value: Given a positive test, what is the likelihood that the patient is actually positive?
Negative predictive value: Given a negative test, what is the likelihood that the patient is actually negative?

Sensitivity and specificity are characteristics of the test. Predictive value depends on the prevalence of the disease as well as the sensitivity and specificity.

For example, the CV19 test is around 65% sensitive if performed correctly. If there is no known CV19 in a community, a negative result is very unlikely to represent an actual case (it will likely be a true negative). The same result in a CV19 hotspot is not as reassuring (it's significantly more likely to be a false negative), even if exactly the same test was used as in the CV19 free zone.
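Nemo's point can be sketched numerically. A minimal Python sketch, assuming the quoted 65% sensitivity and a hypothetical 99% specificity (the specificity figure and the two prevalence values are made up for illustration):

```python
# P(disease | negative test) via Bayes' rule, at two different prevalences.

def p_disease_given_negative(sens, spec, prev):
    """Probability the patient has the disease despite a negative result."""
    false_neg = (1 - sens) * prev   # P(neg | disease) * P(disease)
    true_neg = spec * (1 - prev)    # P(neg | no disease) * P(no disease)
    return false_neg / (false_neg + true_neg)

sens, spec = 0.65, 0.99

# Community with almost no CV19 (0.1% prevalence) vs a hotspot (20%):
low = p_disease_given_negative(sens, spec, 0.001)
high = p_disease_given_negative(sens, spec, 0.20)

print(f"P(c19 | negative), low prevalence: {low:.4f}")   # well under 0.1%
print(f"P(c19 | negative), hotspot:        {high:.4f}")  # roughly 8%
```

Same test, same negative result, but the hotspot patient is over two hundred times more likely to actually be infected.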


----------



## rmrf (Sep 24, 2020)

Nemo said:


> You are kind of describing the difference between sensitivity/ specificity of a test and positive/ negative predictive value of a test result.
> 
> Sensitivity: What % of actual positives will the test detect.
> Specificity: What % of actual negatives will the test say is negative.
> ...



Making sure I understand, in my notation earlier, 
sensitivity = P(test positive | c19),
specificity = P(test negative | no c19),
positive predictive value = P(c19 | test positive),
negative predictive value = P(no c19 | test negative)
prevalence of covid = P(c19)?

If so, I think we're exactly talking about the same thing. In my crappy notation,
positive predictive value = P(c19 | test positive) 
= P(test positive | c19) P(c19) / P(test positive) 
= P(test pos | c19) P(c19) / [P(test pos | c19)P(c19) + P(test pos | no c19) P(no c19)] 
= sensitivity * prevalence of disease / [sensitivity * prevalence of disease + (1-specificity) * (1-prevalence)]
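That final formula drops straight into code. A minimal sketch with hypothetical numbers (99% sensitivity, 99% specificity, 0.1% prevalence; all three are made up to illustrate the rare-disease effect):

```python
# Positive predictive value from sensitivity, specificity and prevalence,
# exactly the formula derived above.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence                # P(pos | c19) P(c19)
    false_pos = (1 - specificity) * (1 - prevalence)   # P(pos | no c19) P(no c19)
    return true_pos / (true_pos + false_pos)

# Even a very accurate test gives a low PPV for a rare disease:
print(ppv(0.99, 0.99, 0.001))  # ≈ 0.09, i.e. ~9% despite a "99% accurate" test
```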




Nemo said:


> For example, the CV19 test is around 65% sensitive if performed correctly. If there is no known CV19 in a community, a negative result is very unlikely to represent an actual case (it will likely be a true negative). The same result in a CV19 hotspot is not as reassuring (it's significantly more likely to be a false negative), even if exactly the same test was used as in the CV19 free zone.


P(c19 | test neg) = 1-P(no c19 | test neg) 
= 1-P(test neg|no c19)P(no c19)/ [P(test neg | c19) P(c19) + P(test neg | no c19)P(no c19)]
= 1 - specificity P(no c19) / [ (1-sensitivity) P(c19) + specificity P(no c19)]

If P(c19) << P(no c19),
P(c19 | test neg) ≈ 1 - spec P(no c19) / [spec P(no c19)] = 0

If P(c19) >> P(no c19),
P(c19 | test neg) ≈ 1 - spec P(no c19) / [(1-sens) P(c19)], which approaches 1 as P(no c19) shrinks.

Great! My bad algebra matches common sense. 

Apologies if this was painfully obvious (or just wrong...). I probably learned it in school, but I never thought I would use it. CV19 sure showed me! Instructors are going to have a field day in the future if students ever argue that they don't need this stuff in their daily lives.


----------



## Geigs (Sep 24, 2020)

A Nature publication on a highly sensitive/specific COVID-19 serology test. The best data I've seen.









Ultrasensitive high-resolution profiling of early seroconversion in patients with COVID-19 - Nature Biomedical Engineering


A multiplexed fluorescence-based assay detects seroconversion in individuals infected with SARS-CoV-2 from less than 1 µl of blood as early as the day of the first positive nasopharyngeal nucleic acid test after symptom onset.




www.nature.com





Also, a brief comment on biological research in general- it is incredibly hard to review, as much of it is novel and you can't say for sure whether the data are real until it's repeated (or not). I've seen some high profile papers fall apart and some big names fall due to dodgy work by students/postdocs. I went to the lab of a Guru in my field to learn some techniques they published in Nature. The author only spoke Mandarin, his notes were all in Mandarin, and it turns out he faked the whole thing, but this didn't come out for YEARS as the studies took that long to try to reproduce. 

Science is cut-throat. Too many snouts in too small a trough, too little oversight to catch all the cheats.


----------



## Nemo (Sep 24, 2020)

rmrf said:


> Making sure I understand, in my notation earlier,
> sensitivity = P(test positive | c19),
> specificity = P(test negative | no c19),
> positive predictive value = P(c19 | test positive),
> ...


I think you have written correct mathematical formulae for the terms, although I'm not a mathematician. I'm definitely not gonna mark your algebra - it's making my head hurt! @ian, help!

But it does look like you've reached the correct conclusions.

In medicine, the PPV and NPV are the most important values because the info you get (and have to act on) is a test result.

Interestingly, a test is most useful when there is a lot of uncertainty in the diagnosis of a condition.

If you have clinical information (symptoms, examination findings, local prevalence data, etc) that suggests that the patient doesn't have the disease, but the test is positive, you need a high specificity test to discount that clinical information (and accept that the test isn't a false positive). The clinical information effectively lowers the PPV of the test.

Likewise, if your clinical information suggests that the patient does have the disease but the test is negative, you need a high sensitivity test to discount that clinical information (and accept that the test isn't a false negative). The clinical information effectively lowers the NPV of the test.

It's really when you are completely unsure about whether the patient has a disease or not that a test (especially a low sensitivity and/or specificity test) is most useful.

Note that in most testing systems, there is a tradeoff between sensitivity and specificity. This tradeoff can be modified by moving the detection thresholds. A higher detection threshold will improve specificity but reduce sensitivity.
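The threshold tradeoff can be illustrated with a toy model. A sketch assuming the test reports a continuous, hypothetically Gaussian score (the means, spreads, and thresholds below are all invented for illustration):

```python
# Moving the detection threshold trades sensitivity against specificity.
from statistics import NormalDist

healthy = NormalDist(mu=0, sigma=1)    # score distribution, no disease
diseased = NormalDist(mu=2, sigma=1)   # score distribution, disease

for threshold in (0.5, 1.0, 1.5):
    sensitivity = 1 - diseased.cdf(threshold)  # diseased people scoring above
    specificity = healthy.cdf(threshold)       # healthy people scoring below
    print(f"threshold {threshold}: sens {sensitivity:.2f}, spec {specificity:.2f}")
```

As the threshold rises, specificity improves and sensitivity falls, exactly the tradeoff described above.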

I just remembered- biostatistics makes my head hurt more than algebra does.


----------



## ian (Sep 24, 2020)

Hah, yea, looks generally correct, although I didn’t look at all of it. Good point about using this in an intro probability class.

All these terms, agh! When you were originally talking about false positive rate, I assumed you meant the rate (which I took to mean percentage) of positive tests that are false, which made the initial discussion quite confusing...


----------



## rmrf (Sep 24, 2020)

Nemo said:


> But it does look like you've reached the correct conclusions.


Success! It's easier to get to the right answer if you already know the right answer!



Nemo said:


> I just remembered- biostatistics makes my head hurt more than algebra does.


Prob/stats always gave me headaches. Only worse thing I tried was combinatorics. Counting is hard when I run out of fingers.


----------



## Midsummer (Sep 24, 2020)

VicVox72 said:


> Nobody is born good at them, but if you can a) get the grades to go to med school b) survive medschool, c) survive a residence+fellowship, you also have the brain power to learn Bayes Rule sufficiently well to not respond "false positive rate is 1% hence you have the disease with 99% probability"
> 
> And a lot of medical expenditure, a lot of suffering, and a handful of severe negative complications are generated from false positives. How many pregnant women had inductions leading to c-sections leading to deaths or debilitation of the mother or child due to doctors not understanding false positives in tests designed to find conditions that suggest an early induction of labor, as one specific example? Now extrapolate to needless biopsies, needless surgery in general, needless additional testing etc etc etc



Let me approach this in a way that educates and does not alienate. 

I am all for enhanced education. My sister and my mother, both PhDs in industrial psychology, had much more formal training in statistics than I had. We had discussions such as this on a regular basis. However, if I were to set up a study, I would enlist the help of our statistician friend and not rely solely on my recollections of the 2 semesters of statistics I had as an undergraduate. 

I have cared for women and their pregnancies for 30 years. Beyond medical school and residency, my education did not stop. 

Your statement "how many pregnant women....early induction" I find offensive, uninformed and highly biased. I imagine that within certain social echo chambers that people float through daily, this statement goes unchallenged. But where is your academic rigor? This sounds like the rant of an uneducated lay midwife, not some academic imbued with profound insights into statistical methods and a thirst for the truth.

The truth is that I and the vast majority of my colleagues in obstetrics understand, with more acuity than you could even imagine, the limitations of the tests we do and the profound consequences of our actions. To have life in your hands is humbling. To suggest that we are so callous as to not recognize the suffering of our patients is elitist, dehumanizing, bigoted, insensitive and plain wrong. 

I think that went well....Beers..
Hopefully that small point is a little clearer.


----------



## VicVox72 (Sep 24, 2020)

Look, I don't want to throw shade on your medical ethics. I am sure you are a great doctor. 

But when my wife's yoga group has 5 inductions because of "the baby is too big" and all 5 turn out exactly at 50th percentile (with 3 turning into an emergency C-section) and the AMA recommendation is that you do not use weight scans as a criterion for inductions outside of extreme cases, I have to either question the ethics or the medical sophistication of the physicians that these poor women saw. 

Also, I read the medical literature on these topics and the AMA recommendation on many issues does not match what actual statistical evidence can support. And then we end up with induction and c section rates a dozen times higher than WHO says should be the case.


----------



## Luftmensch (Sep 29, 2020)

Since @VicVox72 mentioned Bayesian statistics



Nemo said:


> You are kind of describing the difference between sensitivity/ specificity of a test and positive/ negative predictive value of a test result.



I understand the utility of sensitivity and specificity in a medical context but I find they make the math confusing. Which is to say, I would stop here:



rmrf said:


> P(c19 | test positive)
> = P(test positive | c19) P(c19) / P(test positive)



@rmrf has wittingly, or unwittingly, transcribed the examples in Wikipedia. So yes! The algebra checks out. While it is interesting that you can derive the positive predictive value using Bayes' rule, I think it confuses what is going on.

The utility of Bayes rule is that you can update the 'belief' in a proposition based on new knowledge. The canonical form is:



> P(A|B) = P(B|A)P(A) / P(B)



The value on the left-hand side of the equation, P(A|B), is called the _posterior_. It represents the 'belief' we have in the hypothesis 'A' after accounting for the evidence 'B'. The 'belief' P(A), before knowing anything about the evidence 'B', is known as the _prior_. The important part is to recognise the posterior is a function of the evidence _and_ the prior. The cool thing about Bayes' rule is that it is recursive. The posterior can act as the new prior for the next round of observations. To bring this beautifully full-circled:



Michi said:


> this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective.



You can see the analogue between the scientific method and Bayesian statistics: update your belief when new evidence is available 


Moving back to the SARS-CoV-2 example; substitute 'A' with 'c19' and 'B' with 'test positive' into the canonical formula above and it is identical to what @rmrf wrote. Despite more confusing syntax, the same simple principle applies. Bayes' rule allows you to calculate the probability of a person having SARS-CoV-2 given the other prior knowledge (background prevalence, contact) AND a positive test result (new evidence).

A small weakness of Bayesian methods is that they rely on assumptions. For example, how do you reasonably determine the prior? I don't view this criticism as a major flaw in the method. Modelling is littered with assumptions. To me, the more interesting 'weakness' is computational complexity - in particular the value P(B), which is known as the model evidence. It can only be evaluated analytically (fast) for a limited set of probability distributions. In many complex models, the value has to be approximated numerically (slow). Whether this is really a disadvantage depends on your application and whether or not you can avoid the issue with a more intelligent model construction.
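The prior-to-posterior recursion described above fits in a few lines. A minimal sketch for a coin whose bias is either 0.5 (fair) or 0.8 (loaded); the two hypotheses, their bias values, and the flip sequence are all invented for illustration. Note that `update` computes P(B), the model evidence, as the normalizing constant — trivial here because there are only two hypotheses:

```python
# Recursive Bayesian updating: each posterior becomes the next prior.

def update(prior, likelihoods):
    """One application of Bayes' rule over a dict of hypotheses.

    prior: {hypothesis: P(A)}; likelihoods: {hypothesis: P(B|A)}.
    The sum in the denominator is P(B), the model evidence."""
    evidence = sum(prior[h] * likelihoods[h] for h in prior)
    return {h: prior[h] * likelihoods[h] / evidence for h in prior}

belief = {"fair": 0.5, "loaded": 0.5}  # initial prior: no idea which coin
p_heads = {"fair": 0.5, "loaded": 0.8}

for flip in ["H", "H", "T", "H", "H"]:
    lik = {h: p_heads[h] if flip == "H" else 1 - p_heads[h] for h in p_heads}
    belief = update(belief, lik)  # posterior becomes the new prior
    print(flip, belief)
```

After four heads in five flips, the belief has shifted toward "loaded" without ever recomputing from scratch.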


Finally, people _are_ naturally bad at statistics. Even when trained, they can be crappy. It is difficult and it can be counterintuitive. To illustrate the point, riddle me this:



> Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?



It is a famous statistics puzzle: the Monty Hall Problem. Before clicking on the Wikipedia link, calculate the odds of winning the car if you switch.
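For the skeptical, the puzzle can also be checked empirically. A Monte Carlo sketch (the door numbering, trial count, and seed are arbitrary choices):

```python
# Simulate many Monty Hall games and compare win rates for staying vs switching.
import random

def play(switch, trials=100_000, rng=random.Random(0)):
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a goat door that isn't the player's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ≈ 1/3
print("switch:", play(switch=True))    # ≈ 2/3
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time.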


----------



## ian (Sep 29, 2020)

All the statistics words people use confuse the hell out of me. Bayesian blah blah . But then when someone says "P(A|B) = P(B|A)P(A) / P(B)", I'm like "duh... why does that simple equation even have a name? you can prove it in like half a sentence". Guess this is what math training does to you. Makes simple things confusing and difficult things simple.


----------



## rmrf (Sep 29, 2020)

I never remember P(B|A)P(A) / P(B) = P(A|B). I always have to go from P(A,B) = P(A|B)P(B) = P(B|A)P(A). Just adding that little "P(A,B)" made it make a lot more sense to me.



Luftmensch said:


> @rmrf has wittingly, or unwittingly, transcribed the examples in Wikipedia. So yes! The algebra checks out. While it is interesting that you can derive the positive predictive value using Bayes' rule, I think it confuses what is going on.


Hooray for unintended plagiarism! "Great minds think alike but fools rarely differ." 

If I remember my goal in the earlier post correctly, I was trying to derive the positive predictive value to see how different it was from the false positive test rate and then bound the error of reporting the false positive test rate vs reporting the positive predictive value. But I got lazy somewhere. I blame the lack of tex support!




Luftmensch said:


> A small weakness of Bayesian methods is that they rely on assumptions. For example, how do you reasonably determine the prior? I don't view this criticism as a major flaw in the method. Modelling is littered with assumptions. To me, the more interesting 'weakness' is computational complexity - in particular the value P(B), which is known as the model evidence. It can only be evaluated analytically (fast) for a limited set of probability distributions. In many complex models, the value has to be approximated numerically (slow). Whether this is really a disadvantage depends on your application and whether or not you can avoid the issue with a more intelligent model construction.


I agree that it's not a huge problem. The assumptions would be there anyway, just not explicit. Maybe if you want to hide the assumptions to manipulate the model, it is a weakness 

I'm not clicking on that monte(monty?) hall problem link. I'll un-convince myself of the right answer and need to spend an hour re-convincing myself.


----------



## Luftmensch (Sep 29, 2020)

ian said:


> All the statistics words people use confuse the hell out of me.



 as opposed to _surjections_ this... or _homeomorphism_ that... not to mention all the _Heegaard splittings_! 




rmrf said:


> Hooray for unintended plagiarism! "Great minds think alike but fools rarely differ."
> 
> If I remember my goal in the earlier post correctly, I was trying to derive the positive predictive value to see how different it was from the false positive test rate and then bound the error of reporting the false positive test rate vs reporting the positive predictive value. But I got lazy somewhere. I blame the lack of tex support!





All the jargon used in diagnostic testing confuses the **** out of me... My eyes usually glaze over at that point. So it was actually really cool to see the positive predictive value be related to sensitivity and specificity. Thanks for that! Hehe... who would have thought? KKF needing TeX support! 




rmrf said:


> monte(monty?)



Doh! Indeed. Thanks for that catch... I'll correct my post!


----------



## ian (Sep 29, 2020)

Luftmensch said:


> as opposed to _surjections_ this... or _homeomorphism_ that... not to mention all the _Heegaard splittings_!



What are you talking about? Everyone knows what those are.


----------



## ma_sha1 (Sep 29, 2020)

With all the math being thrown around here, I’d like to try my luck on a question that I’ve never gotten an answer for.

Once upon a time, a math dude mathematically proved that 1+1=2. He became a national sensation and the king of nerd bachelors, had girls writing him love letters from all over, and eventually married one of them. He made math cool.

I thought 1+1=2 was just a rule, an assumption on which all other math was built; thus, by definition, it cannot be proven.

Does anyone know the story?
Can anyone explain how to prove 1+1=2?


----------



## Michi (Sep 29, 2020)

ian said:


> What are you talking about? Everyone knows what those are.


Sure, but @Luftmensch likes his sesquipedalianisms


----------



## rmrf (Sep 29, 2020)

ma_sha1 said:


> With all the math being thrown around here, I’d like to try my luck on a question that I’ve never gotten an answer for.
> 
> Once upon a time, a math dude Mathematically proved that 1+1=2, he became a national sensation & the King nerd bachelor, had girls writing love letters to him from all over, eventually married one of them. He made math cool.
> 
> ...


I'm no mathematician, but I think it depends on what axioms (initial assumptions) you take. I remember the popular math majors proving 1+1=2 as a party trick. I think most of them used Peano arithmetic. I don't know if it's the fastest way or not. I don't know why you can't just define the natural numbers as sizes of sets, like 1=|{a}| and 2=|{b,c}|, and addition as the union of disjoint sets, and just count. There's probably something bad about needing to draw unique elements... I don't know. I wasn't cool enough to be at those parties. 
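For the curious, the Peano-style party trick can be written out in a proof assistant. A sketch in Lean 4, where `N`, `one`, and `two` are home-made stand-ins for the standard naturals (a real development would use the library's `Nat`, but building it from scratch shows there is nothing up the sleeve):

```lean
-- A minimal Peano-style construction: a zero and a successor operation.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

def one : N := succ zero
def two : N := succ one

-- With these definitions, 1 + 1 = 2 holds by unfolding definitions:
-- add one one = succ (add one zero) = succ one = two.
example : add one one = two := rfl
```

With this setup the proof is just `rfl` (definitional equality); in other axiom systems, like the one in Principia Mathematica, it takes far more work.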



ian said:


> What are you talking about? Everyone knows what those are.


Injections and surjections I can handle. It's the "one to one function" that I disliked. Why isn't "one to one" a bijection? Also, I hope you have better insight into the prev question.


----------



## ian (Sep 29, 2020)

ma_sha1 said:


> With all the math being thrown around here, I’d like to try my luck on a question that I’ve never gotten an answer for.
> 
> Once upon a time, a math dude Mathematically proved that 1+1=2, he became a national sensation & the King nerd bachelor, had girls writing love letters to him from all over, eventually married one of them. He made math cool.
> 
> ...





rmrf said:


> I'm no mathematician, but I think it depends on what axioms (initial assumptions) you take.



That’s right. One of the early pursuits in the 20th century was to try to formalize the logical foundations of math. This involves finding a very small set of “axioms” that you declare are self evident, and then building up all the rest of math just using those axioms. It’s important to keep the list of axioms small to avoid the possibility that some of the axioms contradict each other. So for instance, you shouldn’t have as your axiom system “there’s a number called 1, there’s a number called 2, there’s a number called 3, ..., and these are all the possible ways that addition and multiplication behave”. Modern math usually takes as axiomatic a set of rules for how “sets” behave. For instance, sets can be “elements” of other sets, and there are familiar ways to combine sets (intersection/union). From this you start to understand sets intuitively as groups of objects, although formally they’re just unknowns that behave according to the axioms. You then define what numbers are in terms of sets, and you define addition just in terms of set theoretic operations. So, you’ll have to define what 1 is, and what 2 is, and what arithmetic is, all in terms of operations on sets, and when you do that it’ll follow that you need to prove things like 1+1=2. Depending on how you set things up, this statement could be obvious, or not.

It’s worth mentioning that it’s only a very small subset of mathematicians that think about this stuff. Most of us do more complicated math that just assumes all this stuff works out. But it’s important that it _has_ been worked out.

An early attempt to formalize the logical foundation of arithmetic (not in terms of set theory, I think(?), but some other way) was this text. I think that’s probably what @ma_sha1 was referring to.


----------



## juice (Sep 29, 2020)

Heh, just listening to a podcast, with the main interview described thus...



> Duke talks with Moon Duchin, a mathematician and professor at Tufts University, about her research into understanding how voting districts work. Through redistricting analysis at the Metric Geometry and Gerrymandering Group, an organization Duchin co-founded, she describes how in this democracy, fairness isn’t always as easy to find when you’re considering where people live and vote.


Made me think of this conversation. I just love the name of her group.


----------



## ian (Sep 29, 2020)

Hah, Moon’s a friend of mine. Glad that she’s getting the popular exposure.


----------



## Luftmensch (Sep 29, 2020)

Michi said:


> Sure, but @Luftmensch likes his sesquipedalianisms



Best word for the practice... Forget the mathematicians. Linguists can be cool too


----------



## Luftmensch (Sep 29, 2020)

ian said:


> That’s right. One of the early pursuits in the 20th century was to try to formalize the logical foundations of math. This involves finding a very small set of “axioms” that you declare are self evident, and then building up all the rest of math just using those axioms. It’s important to keep the list of axioms small to avoid the possibility that some of the axioms contradict each other. So for instance, you shouldn’t have as your axiom system “there’s a number called 1, there’s a number called 2, there’s a number called 3, ..., and these are all the possible ways that addition and multiplication behave”. Modern math usually takes as axiomatic a set of rules for how “sets” behave. For instance, sets can be “elements” of other sets, and there are familiar ways to combine sets (intersection/union). From this you start to understand sets intuitively as groups of objects, although formally they’re just unknowns that behave according to the axioms. You then define what numbers are in terms of sets, and you define addition just in terms of set theoretic operations. So, you’ll have to define what 1 is, and what 2 is, and what arithmetic is, all in terms of operations on sets, and when you do that it’ll follow that you need to prove things like 1+1=2. Depending on how you set things up, this statement could be obvious, or not.
> 
> It’s worth mentioning that it’s only a very small subset of mathematicians that think about this stuff. Most of us do more complicated math that just assumes all this stuff works out. But it’s important that it _has_ been worked out.
> 
> An early attempt to formalize the logical foundation of arithmetic (not in terms of set theory, I think(?), but some other way) was this text. I think that’s probably what @ma_sha1 was referring to.



Reminds me of a subtle joke in Futurama. The episode where the Brains plan to collect all information in the universe and store it in an Infosphere. The video ought to load at the timestamp where the Infosphere is finishing its knowledge acquisition:


----------



## ma_sha1 (Sep 30, 2020)



ian said:


> That’s right. One of the early pursuits in the 20th century was to try to formalize the logical foundations of math. This involves finding a very small set of “axioms” that you declare are self evident, and then building up all the rest of math just using those axioms. It’s important to keep the list of axioms small to avoid the possibility that some of the axioms contradict each other. So for instance, you shouldn’t have as your axiom system “there’s a number called 1, there’s a number called 2, there’s a number called 3, ..., and these are all the possible ways that addition and multiplication behave”. Modern math usually takes as axiomatic a set of rules for how “sets” behave. For instance, sets can be “elements” of other sets, and there are familiar ways to combine sets (intersection/union). From this you start to understand sets intuitively as groups of objects, although formally they’re just unknowns that behave according to the axioms. You then define what numbers are in terms of sets, and you define addition just in terms of set theoretic operations. So, you’ll have to define what 1 is, and what 2 is, and what arithmetic is, all in terms of operations on sets, and when you do that it’ll follow that you need to prove things like 1+1=2. Depending on how you set things up, this statement could be obvious, or not.
> 
> It’s worth mentioning that it’s only a very small subset of mathematicians that think about this stuff. Most of us do more complicated math that just assumes all this stuff works out. But it’s important that it _has_ been worked out.
> 
> An early attempt to formalize the logical foundation of arithmetic (not in terms of set theory, I think(?), but some other way) was this text. I think that’s probably what @ma_sha1 was referring to.



Thanks Ian, very good explanations!

When you have to “define what 1 is, and what 2 is”, won’t you have to define 2 by duplicating 1? How else could you define 2?

Since you need to put 2 into the axioms as part of the definition, its relationship to 1 is already defined by the rules; it can’t be “proven” again by mathematical deduction. 

BTW, I was referring to this guy:








Chen Jingrun - Wikipedia







en.m.wikipedia.org





He was all over the Chinese news for proving 1+1=2, and the media taught kids: be nerdy like him and girls will be all over you. Totally the opposite of the US. 

Now looks like he wasn’t even the one who did it? My whole life has been a lie.


----------



## juice (Sep 30, 2020)

ma_sha1 said:


> My whole life has been a lie.


I think we've found the basis for your blade shenanigans...


----------



## ian (Sep 30, 2020)

ma_sha1 said:


> When you have to “define what 1 is, and what 2 is”, won’t you have to say 2 is by duplicating 1? How else could you define 2?



You’re operating too informally here. The point of all of this is the very rigorous precise definitions. That said, you’re not so wrong, in that depending on the axiom system sometimes 2 is defined as the ‘successor‘ of 1, in a way that makes it kind of obvious that 1+1=2 if you go through all the definitions of everything. But lots of obvious looking statements require proof, like the statement that “a+b=b+a”. That’s not completely obvious from the way the definitions are often set up. Also, in the Principia Mathematica that I previously linked, it seems like there’s some actual content to 1+1=2, so maybe they’re setting things up in a way where that’s not immediate. Haven’t read it.



ma_sha1 said:


> Now looks like he wasn’t even the one who did it? My whole life has been a lie.



Yea, looks like he was a number theorist, not a logician.


----------

