fossilesque

@fossilesque@mander.xyz

A lazy cat in human skin, an eldritch being borne of the '90s.

fossilesque,

Beep boop yer welcome.

fossilesque,

Tell us more!

fossilesque,

A meme doesn’t have to be funny. ;)

en.wikipedia.org/wiki/Meme?wprov=sfla1

fossilesque,

You engaged, didn’t you?

fossilesque, (edited )

Nah, replication is enough, shows interest. This meme was stolen, thus it lives on as a vehicle. The intention here is to simply get people interested in the world around them. Not sure what the original goal was. This one just has a little more .jpg than the last. I’ve seen variants of this one, regardless. This is an old one.

www.merriam-webster.com/dictionary/meme

fossilesque, (edited )

Listen, sometimes a quickie is fun.

fossilesque, (edited )

Aloo-min-i-um makes the thumbs sound like cartoons.

fossilesque, (edited )

They try to correct me here and I laugh at them, then they call me an uncivilized yank. And by they I mean my Brit partner, but he grew up in NJ so I’m not sure who he is calling uncivilised.

fossilesque,

jojoline (。O ω O。)

www.youtube.com/watch?v=Ixrje2rXLMA

fossilesque, (edited )

Some aspects of mythology or alchemy are also useful, but that doesn’t make either of them a respected science, or rule out that the useful part is caused by a secondary phenomenon. As that wiki states, it’s the suggestion aspect that is useful, not the hypnosis itself (the methodology), and there isn’t really a consensus on its efficacy.

The statement “If it’s useful for anything, then it’s not pseudoscience” is an example of a logical fallacy known as a false dichotomy or a false dilemma. This fallacy occurs when someone presents a situation as if there are only two mutually exclusive options or possibilities when, in fact, there are more potential alternatives or nuances to consider.

In this case, the statement implies that something can either be “useful” or “pseudoscience,” with no middle ground or other possibilities. In reality, an idea or concept can have some utility or practical applications while still being considered pseudoscientific or lacking scientific validity. The two categories are not necessarily mutually exclusive, and this oversimplified dichotomy ignores the complexity of the subject matter.

This is basically part of the joke that this headline implies.

fossilesque, (edited )

No, pseudoscience simply consists of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the scientific method. It’s more about methodology and subsequent reproducibility, not simply results. There’s an important difference here.

www.merriam-webster.com/dictionary/pseudoscience

Even pseudoscientific fields can produce results that appear to be beneficial or effective; however, these results may not be replicable, or may be the result of placebo effects or other biases.

As the earlier wiki link states: “Criticism of pseudoscience, generally by the scientific community or skeptical organizations, involves critiques of the logical, methodological, or rhetorical bases of the topic in question.”

That “some aspects” in the earlier quote is doing a lot of heavy lifting here. Notice the word ‘suggestion’ in place of hypnosis. The entry that follows it in that link deals directly with hypnotherapy. If you look under Efficacy in this next wiki link, nearly all meta studies say there is inconclusive evidence to support this practice as any sort of standalone treatment. en.wikipedia.org/wiki/Hypnotherapy?wprov=sfla1 Partial evidence may hint that it is touching on something(s) we can isolate and apply in a better way.

fossilesque, (edited )

This is also a logical fallacy, actually several: false analogy (qualitative vs quantitative) and appeal to authority, namely. There is a practitioner here telling you it’s a placebo (literally a sham medical treatment that can be useful for secondary effects), and wiki classifies it as pseudoscience… Again, even pseudoscientific fields can produce results that appear to be beneficial or effective; however, these results may not be replicable, or may be the result of placebo effects or other biases. No major journal is currently touching this topic as a potential standalone treatment.

I’m not sure what else you want, but I sure hope that you don’t work in the sciences. 😅

Here: …harvard.edu/…/the-power-of-the-placebo-effect “Placebos may make you feel better, but they will not cure you.”

fossilesque, (edited )

This isn’t a good journal and the author isn’t an MD. The journal barely has an impact factor. 10 or more is considered very good (extremely reliable). This journal has less than 2; that’s super abysmal. Again, there is a reason major journals (IF of much more than 10) don’t deal with this.

The Impact Factor for a journal is calculated by dividing the number of citations its articles from the two previous years received in a given year by the total number of articles it published in those two years. This journal is barely a footnote. For comparison, Nature, one of the best of the best, has an IF of 64.8.
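
To make that concrete with hypothetical round numbers (not this journal’s actual counts): if a journal’s 2021 and 2022 articles picked up 500 citations during 2023, and it published 250 articles across those two years, its 2023 IF is 500 / 250 = 2. An IF in Nature’s range means each recent article is being cited dozens of times a year on that same math.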

Science is a conversation. This low number means that, on average, each paper from this entire journal was cited only once or twice in the last couple of years, even just in passing. It’s not part of the conversation, and hardly has a seat at the discussion table.

Edit: dyscalculia moment.

fossilesque, (edited )

I’m asking you to back yourself with a credible journal. You did not and jumped to anecdote. I’m open to having my mind changed but I want to see actual evidence. This next journal has an impact factor of 2. This is not a great score, especially for medicine. Hell, even Frontiers scores higher. Placebos do work and have utility, by the way, just as the Harvard article I linked said and I’ve repeated over and over. That’s not the issue.

fossilesque, (edited )

en.wikipedia.org/wiki/Impact_factor?wprov=sfla1 This is part of how the scientific conversation works; it’s not perfect, but it’s good for generalising and mostly reliable. Things that become mainstream parts of the conversation will get more citations, especially as funding will flow that way, so a lot of the criticisms smooth out in practice. I’m trying to explain how this all works because it’s complicated, valuable to know, and very political. Just because someone published something doesn’t make it infallible. There’s really a range of grey because it is a conversation. Having a good journal backing you carries a lot of weight, as they rest their reputation on you, multiplying your voice in a way. I like to picture it like a video game multiplier.

PubMed is a search engine for many journals. It’s not one journal.

When you write a paper, you’re not trying to prove something. You’re trying to attack your hypothesis from all angles and disprove it. You want to be wrong, because what’s the fun in knowing everything?

fossilesque,

That’s ok. It’s good to question things. I realise this stuff is hard. I added an important caveat to how we approach hypotheses. There is actually a lot of writing about how there is too much information to filter these days, even for academics. This is why we rely on things like the impact factor. Additionally, anyone can technically submit to a journal, but it is hard to get in because of these kinds of politics.

fossilesque, (edited )

When you find a paper, Google the name of the journal + “impact factor”, and you should find something. Some journals display their metrics with different scores due to complications with the IF system, so you’ll need to judge those accordingly, but they should come up with the same search keywords. There should also be a body of literature in higher-scoring journals, not just single papers. Also, look up your authors and see if this is actually something they’re qualified for. This all shows the idea has been established and accepted as part of the mainstream conversation. This is the academic “sniff test.”

The problem with hypnosis isn’t the absence of evidence; it’s the lack of significant effects (efficacy), notably as a standalone treatment. Most sciences measure this with a variant of a p-value. en.wikipedia.org/wiki/P-value?wprov=sfla1 Note that a “significant” p-value can still be driven by placebo effects if the study isn’t properly controlled.
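
A concrete illustration with made-up numbers: a trial reporting p = 0.04 is saying that an effect at least as large as the one observed would turn up only about 4% of the time by chance if the treatment truly did nothing. That alone says nothing about whether the effect is large, useful, or caused by the treatment rather than by expectation, which is why placebo-controlled designs and those meta studies matter.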

It’s also kind of important that the research is relatively recent, because some metascience trends have changed our understanding of things and we have different standards now.

fossilesque, (edited )

Efficacy. It needs to pass through this before it gets to effectiveness testing. Meta studies are important for examining this, hence the wiki section mentioned earlier, which lists a bunch.

www.ncbi.nlm.nih.gov/pmc/articles/PMC3726789/

Note that just being in the conversation doesn’t mean it’s not being cannibalised. Papers or trends may arise that put other researchers in a tizzy. If it’s an accepted practice, you are likely to see a lot of papers fine tuning methods.

The placebo thing shuffles hypnosis under that umbrella, and placebos come with a lot of issues of their own.

fossilesque, (edited )

You generally got it. ;) The grey areas keep things interesting. Methodology is also important to consider and pick apart, along with more and more considerations of appropriate applications and working contexts. It may be that this practice should be re-categorised rhetorically too, e.g. the language that we use to talk about this subject causes too much confusion, as this thread exemplifies.

Lots of things were once seen as mystical woo but later had some of the phenomena established with good investigations. From what I have seen, and I’m by no means an expert, the body of literature one would expect for this just isn’t there yet.

PS: Determining a good IF score will depend on the niche-ness and topic as well, but that is why you try not to examine literature in a vacuum of one or two papers. Naturally, those who read more on these specific subjects are the best judges.

fossilesque, (edited )

What is and isn’t good science changes with changes in metascience (the science of science), which is also why it’s important to keep current with the literature, especially in today’s world. Philosophy and History of Science are fields having an exciting little boom right now, with tonnes of great researchers and lay books.

en.wikipedia.org/wiki/History_of_science?wprov=sf…

en.wikipedia.org/…/History_of_science_and_technol…

en.wikipedia.org/wiki/Philosophy_of_science?wprov…

en.wikipedia.org/wiki/Historiography_of_science?w…

en.wikipedia.org/wiki/Metascience?wprov=sfla1

(As an aside, I use wiki a lot for a quick jumping off point as I trust it a bit more after I started editing it; they do try their best and are vigilant and passionate.)

This guy set in motion a lot of current practices of “good science”: en.wikipedia.org/wiki/Karl_Popper?wprov=sfla1

I like this guy from Durham in particular: markrubin.substack.com - he’s got some cool links in the about section, but his stuff is a little technical. Nice dude.
