I want to be self-sufficient enough to “own” my machine (i.e. to be able to repair it myself). OTOH, judging from the service manual steps with their difficult measurements, I see lots of room for user error. It seems I have a good chance of making things worse by adjusting something that doesn’t need it. I doubt I’ll find a pro in my area whose shoulder I can watch over, so accepting the risk of making it worse might be my best option.
The shuttle hook sits a bit higher than the needle eye when they cross. This seems necessary on this machine so the hook can grab the thread. If I understand the service manual correctly, the shuttle hook should be ~⅛ inch above the needle eye. I can only eyeball it because it’s quite hard to measure.
I don’t see an early/late difference between left and right. Left, right, and center appear the same as far as I can tell.
The presser foot was loose, so I hope that explains how I broke a needle.
When manually cranking a straight stitch, it seems to miss ~1 in 10 stitches. When manually cranking the widest zig-zag, the left misses most of the time: probably 7–9 out of 10 stitches miss on the left, while on the right maybe 1 out of 10 miss. Yet without fabric, with the machine on its side, it appears to work correctly most of the time on both left and right. That is, it grabs the top thread, drags it in a circle, and then wraps the lower thread, so each cycle effectively adds a twist to the threads. It seems to be a heisenbug (the kind of bug that hides when you’re looking closely at it). I could really use a tiny camera to record what happens underneath while sewing fabric.
Before today I’d been inserting the bobbin so it spins counter-clockwise (a YouTuber said that’s the correct way), but I just noticed the manual shows a clockwise insertion. So I switched to clockwise. I’ve noticed no difference, at least when hand cranking.
I just went to a street market and lifted¹ a dozen or so machines. I bought the heaviest one, figuring it has the fewest plastic parts. It’s a Singer from the late ’60s with a front-load bobbin. Found the user manual and service manual online.
The one job I’m uncertain about is hemming denim. The user manual says it can sew denim with a size 16 needle and canvas with a size 18 needle, but hemming denim means it must be able to penetrate 6 layers of denim (where the vertical seam gets folded). The manual makes no mention of layers. The motor is 85 watts.
I realized well after buying that there is no free arm. I suspect that will be quite inconvenient for hemming in my case, because I was planning a non-destructive technique: undo the original hem and, without cutting, fold it up the leg as high as needed. That will be a pain when the cuff can’t wrap around the machine’s base.
It’s a big database. It would be poor design to replicate a DB of all links in every single client. Synchronization of the DB would not be cheap. When Bob says link X has anti-feature Y, that information must then be shared with tens of thousands of other users.
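To make the sync-cost point concrete, here’s a back-of-envelope comparison. All the figures are made-up illustrative assumptions, not measurements from any real Lemmy deployment:

```python
# Back-of-envelope sync cost. Every number here is an assumption
# chosen only to illustrate the scaling argument.
USERS = 50_000            # "tens of thousands" of clients
UPDATES_PER_DAY = 10_000  # new/changed link annotations per day
BYTES_PER_UPDATE = 200    # URL + anti-feature tag + signature

# Full per-client replication: every update fans out to every client.
daily_fanout_bytes = USERS * UPDATES_PER_DAY * BYTES_PER_UPDATE
print(f"{daily_fanout_bytes / 1e9:.0f} GB/day if every client mirrors the DB")

# Shared metrics store: clients query on demand instead of mirroring.
QUERIES_PER_USER_PER_DAY = 100
daily_query_bytes = USERS * QUERIES_PER_USER_PER_DAY * BYTES_PER_UPDATE
print(f"{daily_query_bytes / 1e9:.1f} GB/day if clients query a server instead")
```

Under these assumptions, full replication costs two orders of magnitude more bandwidth than on-demand queries, which is the gist of why a per-client copy of the DB is a poor design.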
Perhaps you have a more absolute idea of “centralized”. With Mastodon votes, they are centralized on each node, but overall that’s actually decentralized. My bad; I probably shouldn’t have said “centralized”. I meant more centralized than a client-by-client basis. It’d be premature to pin those details down at this point, other than to say it’s crazy for each client to maintain a separate copy of that DB.
And how would you guarantee the integrity of the ones holding the metrics?
The server is much better equipped for that than the user. The guarantee would be the same one you have with Mastodon votes: good enough to be fit for purpose. For any given Mastodon poll, everyone sees a subset of votes, but that’s fine. Perfection is not critical here. You wouldn’t want it deciding a general election, but you don’t need that level of integrity.
A lot less effort than having to deal with the different “features” that each website admin decides to run on their own.
That doesn’t make sense. Either one person upgrades their Lemmy server, or thousands of people have to install, configure, and maintain a dozen different browser plugins, ported to a variety of different browsers (close enough to impossible to just call it impossible). Then every Lemmy client also has to replicate that complexity.
I mean, does archive.org get away with it, though?
They get blocked by some sites, and some sites have proactively opted out. archive.org respects the opt-outs. AFAICT, archive.org gets away with archiving the non-opt-out cases where their bot was permitted.
And do I really have to spell out how Google gets away with caching stuff?
You might need to explain why 12ft.io gets away with sharing Google’s cache, as Lemmy could theoretically operate the same way.
I’m extremely skeptical fair use could be twisted to our defense in this particular case.
When you say “twisted”, do you mean commentary is not a standard accepted and well-known fair use scenario?
You don’t know what a logical fallacy is. Bob and Alice can disagree about whether the pizza tastes good or bad. There’s no fallacy there, just subjective disagreement.
The browser (more appropriately named: the client) indeed needs some of the logic here, but it cannot do the full job I’ve outlined. The metrics need to be centralized. And when you say “browser” specifically, that imposes an inefficient amount of effort & expertise on the end-user. A dedicated client can make it easy on the user, but it’s an incomplete solution nonetheless.
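A rough sketch of that client/server split, to show where the logic would live. The service class, method names, and response shape are all hypothetical; a real client would make an HTTP call where `MetricsService` is stubbed in-process here:

```python
# Sketch of the split: shared metrics live on a server, the client
# only does cheap presentation decisions. Everything named here is
# a hypothetical illustration, not a real Lemmy API.
from dataclasses import dataclass, field

@dataclass
class LinkMetrics:
    url: str
    anti_features: list = field(default_factory=list)  # e.g. ["paywall"]
    reports: int = 0                                   # how many users flagged it

class MetricsService:
    """Stands in for the shared server; a real client would query it
    over the network rather than hold the data itself."""
    def __init__(self):
        self._db = {}

    def report(self, url, anti_feature):
        m = self._db.setdefault(url, LinkMetrics(url))
        if anti_feature not in m.anti_features:
            m.anti_features.append(anti_feature)
        m.reports += 1

    def lookup(self, url):
        return self._db.get(url, LinkMetrics(url))

def annotate_link(service, url, user_prefs):
    """Client-side logic: fetch shared metrics, then apply the
    user's own tolerances when deciding how to present the link."""
    m = service.lookup(url)
    flagged = [f for f in m.anti_features
               if f not in user_prefs.get("tolerate", [])]
    return f"{url} [{', '.join(flagged)}]" if flagged else url

svc = MetricsService()
svc.report("https://example.com/story", "paywall")
print(annotate_link(svc, "https://example.com/story", {"tolerate": []}))
print(annotate_link(svc, "https://example.com/story", {"tolerate": ["paywall"]}))
```

The point of the split: the heavy, shared, synchronized state stays in one place, while each client keeps only its own lightweight preferences.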
You’re going to have to name this fallacy you keep talking about because so far you’re not making sense.
Sometimes a paywalled source is the first to report on something. Calling that link a bad link is nonsense.
One man’s bad link is another man’s good link. It’s nonsense to prescribe for everyone one definition of “bad”. What’s bad weather? Rain? I love rain. Stop trying to speak for everyone and impose your idea of “bad” on people.
Many people don’t know all the websites to redirect things through without that, so calling their contribution “bad” just because they posted that link isn’t the greatest.
So because someone might not know their link is bad, it ceases to be bad? Nonsense.
It’s not even like it’s that big an issue, because *usually* someone else comes along that provides an alt link in the replies,
(emphasis mine) Usually that does not happen.
so saying that this is a social failure is also ridiculous, because both were provided between two people.
This is based on the false premise that bad links are usually supplemented with an alternate from someone else.
Also, the notion that you or anyone else is socially filtering non-misinformation news sources from the rest of us, because you don’t see the value in it, or cannot figure out how to bypass the paywall yourself, isn’t all that great either.
(emphasis mine) Every user can define an enshitified site how they want. If you like paywalls, why not have your user-side config give you a personalized favorable presentation of such links?
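To show what “user-side config” could look like, here’s a tiny sketch. The config keys, treatments, and function are all hypothetical, not settings in any real client:

```python
# Hypothetical per-user config: each user decides how links with a
# given anti-feature are presented to *them*. Nothing here is a real
# Lemmy client setting; it's a sketch of the idea.
DEFAULT = "show"

my_config = {
    "paywall": "badge",    # I want a warning badge, not removal
    "cloudflare": "hide",  # I never want these surfaced in my feed
}

def present(url, anti_features, config):
    treatments = [config.get(f, DEFAULT) for f in anti_features]
    if "hide" in treatments:
        return None          # filtered out entirely for this user
    if "badge" in treatments:
        return f"(!) {url}"  # shown, but flagged for this user
    return url               # a user who likes paywalls just sees the link

print(present("https://news.example/a", ["paywall"], my_config))
print(present("https://news.example/b", [], my_config))
```

One person’s “bad link” stays their own setting: someone who loves paywalls simply leaves `"paywall"` out of their config and sees the bare link.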
Just like Greenpeace paves the way for smaller activist groups that can’t stand up to legal challenges, archive.org would serve the same role. When archive.org (with ALA backing) wins a case, that’s a win for everyone who would do the same. Lemmy would obviously stay behind on the path archive.org paves, not try to lead.
A link is not a bad link for going to the source. You’ve misunderstood the post and also failed to identify a logical fallacy (even had your understanding been correct).
Whether the link goes to the source or not is irrelevant. I’m calling it a bad link if it goes to a place that’s enshitified and/or where the content is unreachable (source or not). This is more elaborate than what you’re used to. There are more than a dozen variables that can make a link bad. Sometimes the mirror is worse than the source (e.g. archive*ph, which is a Cloudflared mirror site).
It would need some analysis by legal experts. But consider that archive.org gets away with it, although archive.org has an opt-out mechanism. So perhaps each Lemmy instance should have an opt-out mechanism too, which should push a CAPTCHA, in perhaps one of the few good uses for CAPTCHAs. Then if Quora wants to opt out, they have to visit every Lemmy instance, complete the opt-out form, and solve the CAPTCHA. Muahaha!
Note as well how 12ft.io works: it serves you Google’s cache of a site (which is actually what the search index uses). How did Google get the right to keep those caches?
There’s also the #fairUse doctrine. You can quote a work if you’re commenting on it, which is what we do in the threadiverse. Though not always, so perhaps the caching should be restricted to threads that have comments.