@lvxferre@lemmy.ml avatar

lvxferre

@lvxferre@lemmy.ml

This account is being kept for posterity, but it won’t see further activity past February.

If you want to contact me, I’m at /u/lvxferre@mander.xyz

This profile is from a federated server and may be incomplete. Browse more on the original instance.

lvxferre, (edited)

Neither, but if I must choose, it’s probably slightly more like muscle than cartilage. If prepared properly it’s really soft and a bit chewy, distantly reminding me of meat from stews.

(That reminds me of a local pub that prepares some fucking amazing breaded and deep-fried tripe. Definitely not doing it at home - it spills and bubbles the oil like crazy.)

How wealthy are those elderly people who hire someone to be with them at all times, instead of moving into a nursing home?

I guess I don’t care how wealthy they are, my question is how much would it cost to hire someone to be your caretaker 24/7 and go with you everywhere you want to go like the grocery store etc

lvxferre,

Don’t feel discouraged by the Karen above, who should’ve stayed on Reddit alongside their peers. Thoughtful contribution is often verbose, and there’s nothing wrong with that.

lvxferre,

Musk being an assumer (note how he’s vomiting certainty about future events) doesn’t surprise me one tiny bit.

lvxferre,

I don’t understand, why are you calling the other poster racist? I’m so confused… everything that he said is true. Source: I’m a gratch.

lvxferre, (edited)

One potential regression that I see is the current generative models being abandoned, after being ruled as “infringing copyrights” by multiple countries. The tech itself won’t disappear, but it’ll become considerably harder to train newer models.

The most problematic part, however, is if one of them survives; likely Google’s. That would lead to a situation like the one in your second paragraph.

lvxferre,

I looked for it for, like, an hour or so, but couldn’t find the scanned copies. The closest that I found was the online version of the lexicon, which claims to contain all six volumes.

lvxferre,

Lunix sucks so much that it got stuck on version 2 for years.

lvxferre, (edited)

The source that I’ve linked mentions semantic embedding; so does further literature on the internet. However, the operations are still performed on the vectors derived from the tokens themselves, with said embedding playing a secondary role.

This is evident, for example, through excerpts like:

The token embeddings map a token ID to a fixed-size vector with some semantic meaning of the tokens. These brings some interesting properties: similar tokens will have a similar embedding (in other words, calculating the cosine similarity between two embeddings will give us a good idea of how similar the tokens are).

Emphasis mine. A similar conclusion (that the LLM is still handling the tokens, not their meaning) can be reached by analysing the hallucinations that your typical LLM bot outputs, and asking why each hallucination is there.

What I’m proposing is deeper than that. It’s to use the input tokens (i.e. morphemes) only to retrieve the sememes (units of meaning; further info here) that they convey, then discard the tokens themselves and perform the operations solely on the sememes. For the output, you translate the sememes produced by the transformer back into morphemes = tokens.

I believe that this would have two big benefits:

  1. The amount of data necessary to “train” the LLM will decrease. Perhaps by orders of magnitude.
  2. A major type of hallucination will go away: self-contradiction (for example: states that A exists, then that A doesn’t exist).

And it might be an additional layer, but the whole approach is considerably simpler than what’s currently being done - pretending that the tokens themselves have some intrinsic value, then playing whack-a-mole with situations where the token and the value contextually assigned to it (by the human using the LLM) differ.

[This could even go deeper, handling a pragmatic layer beyond the tokens/morphemes and the units of meaning/sememes. It would be closer to what @njordomir understood from my other comment, as it would then deal with the intent of the utterance.]
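
A toy sketch of that pipeline in Python. The lexicon, the sememe labels, and the function names are all invented for illustration; a real system would need a full morpheme-to-sememe dictionary plus a model operating on sememe representations where the identity step sits here:

```python
# Toy pipeline: tokens -> sememes -> (model operates on sememes) -> tokens.
# LEXICON and its sememe labels are made up for illustration.
LEXICON = {
    "cat": ("FELINE",),
    "cats": ("FELINE", "PLURAL"),
    "dogs": ("CANINE", "PLURAL"),
}
REVERSE = {sememes: morpheme for morpheme, sememes in LEXICON.items()}

def to_sememes(tokens):
    """Retrieve the sememes each token conveys, discarding the tokens."""
    return [LEXICON[t] for t in tokens]

def to_tokens(sememes):
    """Translate sememes back into morphemes (= tokens) for the output."""
    return [REVERSE[s] for s in sememes]

# The transformer would sit between these two calls, operating on
# sememes only; here it's simply omitted (identity).
print(to_tokens(to_sememes(["cats", "dogs"])))  # ['cats', 'dogs']
```
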

lvxferre,

Not quite. I’m focusing on chatbots like Bard, ChatGPT and the likes, and their technology (LLM, or large language model).

At the core those LLMs work like this: they pick words, split them into “tokens”, and then perform a few operations on those tokens, across multiple layers. But at the end of the day they still work with the words themselves, not with the meaning being encoded by those words.

What I want is an LLM that assigns multiple meanings to those words, and performs the operations above on the meaning itself. In other words, the LLM would actually understand you, not just chain words together.

lvxferre,

Oh “great”, more crap between Ctrl and Alt.

[Grumpy grandpa] In my day, the space row only had five keys! And we did more than those youngsters do with their eight, now nine keys!

lvxferre,

Complexity does not mean sophistication when it comes to AI and never has and to treat it as such is just a forceful way to make your ideas come true without putting in the real effort.

It’s a bit off-topic, but what I really want is a language model that assigns semantic values to the tokens, and handles those values instead of working directly with the tokens themselves. That would probably be far less complex than current state-of-the-art LLMs, but way more sophisticated, and it would require far less data for “training”.

lvxferre,

Aaaaah. I really, really wanted to complain about the excessive amount of keys.

(My comment above is partially a joke - don’t take it too seriously. Even if a new key was added it would be a bit more clutter, but not that big of a deal.)

lvxferre, (edited)

Stating obvious shit like it was some hidden piece of wisdom? Inability to handle subtleties like “lying” vs. “saying an incorrect statement”? Voting system? People repeating the same shit over and over, without reading the others’ comments?

EDIT: I’m highlighting that this YT comment section shows a lot of the things people hate about Reddit. In some aspects they’re behaving exactly like redditors; in some they’re actually doing better, even if YT is a cesspool of idiocy.

lvxferre, (edited)

Apparently my method is a mix of those listed in the text.

I’m in a similar situation to OP: some of my income is irregular. So my monthly budget isn’t directly based on the last month’s income; I use the average of the last six months, relying on a checking account for that. (I keep it with enough money to last me one or two months.)

Then I split that budget into four categories:

  • savings - I aim for 25%. Into the saving account it goes.
  • monthly fixed expenses - periodic, somewhat predictable, monthly. For example bills, cornmeal and rice, cat food, etc.
  • variable expenses - necessities like the above, but with some wiggle room. Like, if necessary I don’t mind eating eggs for four lunches a week and walking instead of taking the bus, but I’d rather not. Usually split into four weeks, so I spend it gradually.
  • “fluff”*¹ - avoidable expenses that I still want for some reason, like “it improves my mood”. Things for my hobbies, going to a restaurant, buying nicer clothes or hardware, etc. Unused fluff gets transferred to my savings account in the following month.

Then here’s how I address some complexities:

  • periodic expenses for things that I buy every few months (e.g. gas canisters) - I include a fraction of them in the monthly fixed expenses, and only remove the money from the checking account when actually buying them.
  • erratic but large expenses (e.g. house repairs) - I usually “borrow” this money from the savings, then “repay” it in the following months, as a fixed expense*².
  • high income multiple months in a row - I cap the budget and send the overflow to the savings.
  • low income multiple months in a row - cut down fluff, then reduce variable expenses, then reduce monthly fixed expenses, then reduce savings, in this order.
  • really low income multiple months in a row - if really necessary I borrow from the savings, keeping in mind that I’ll need to repay myself.

Notes:

  1. The actual name that I give to this category is “imposto das lombrigas”, or roughly “roundworm tax”. That’s from my family jokingly referring to cravings as “having roundworms for [something]”.
  2. Some people might use a credit card for that instead, to build credit; that also works, but it depends a lot on the government that you pay taxes to. I do have a credit card but I tend to avoid it, as there are often discounts for paying in cash.
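
As a minimal sketch of the arithmetic above: the 25% savings share is the one stated; the other category percentages are placeholders I made up, since only the savings target is given.

```python
# Budget from the average of the last six months' income, split into the
# four categories above. Only the 25% savings share comes from the text;
# the remaining percentages are placeholder assumptions.
def monthly_budget(last_six_incomes):
    base = sum(last_six_incomes) / len(last_six_incomes)
    shares = {"savings": 0.25, "fixed": 0.40, "variable": 0.25, "fluff": 0.10}
    return {category: base * share for category, share in shares.items()}

budget = monthly_budget([2000, 1500, 2500, 1800, 2200, 2000])
print(budget["savings"])  # 25% of the 2000.0 average: 500.0
```
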
lvxferre,

Calcium chloride exists, it’s CaCl₂. You need two chloride anions for each calcium cation. [see note*]

It’s safe to eat as long as it’s food grade. In fact it’s used in cheesemaking. It’s salty and bitter. It’s also used to dehydrate stuff in the laboratory, since it absorbs water like there’s no tomorrow.

It doesn’t behave like metallic calcium at all. Just like sodium chloride (aka table salt) doesn’t behave like metallic sodium (warning: loud noise).

*Note: technically CaCl (one chlorine) exists as a diatomic molecule. It’s rarely found even in stars, and you won’t find it on Earth.
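
The two-to-one ratio falls straight out of charge balance (one Ca²⁺ needs two Cl⁻ to be neutral); here’s a tiny sketch, with a function name of my own invention:

```python
from math import gcd

def ionic_formula(cation_charge, anion_charge):
    """Smallest cation/anion counts giving a charge-neutral compound."""
    g = gcd(cation_charge, anion_charge)
    return anion_charge // g, cation_charge // g

print(ionic_formula(2, 1))  # Ca2+ with Cl-: (1, 2), i.e. CaCl2
```
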

lvxferre, (edited)

I’m proposing to check their texts out because it’s a good way to get theoretical background to back up your beliefs, if you believe in a peaceful transition. (Here’s a link to a good one, by the way.)

It’s also useful for Marxists, given that Marxism has always interacted with other left-wing schools of thought. Reading this stuff gives you better historical context on why Marxism defends some policies instead of others.

lvxferre,

Translation:

  • when you’re walking alone
  • don’t you ever feel
  • like being observed?
  • [God saying] you bloody paranoid
lvxferre,

I’ve seen even people in their 40s using them. I don’t think that it’s a big deal, or that it’s too late for that.

lvxferre, (edited)

I like them, even for software installation. Partially because they’re lazy - it takes almost no effort to write a bash script that will solve a problem like this.

That said a flatpak (like you proposed) would look far more polished, indeed.

lvxferre,

Frankly, in this case even a simple bash script would do the trick. Have it check your distro, version, and architecture; whether you have curl and the like; then ask whether you want the stable or the beta version of the software. Based on that info, it adds Mullvad to your repositories and automatically installs it.
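
Those pre-install checks could be sketched like this - in Python rather than bash, purely for illustration. The `/etc/os-release` path is the standard systemd location; the function name and returned keys are my own invention, and the actual repository-setup step is left out:

```python
import platform
import shutil

def preflight():
    """Gather distro ID, version, architecture, and curl availability."""
    info = {"ID": "unknown", "VERSION_ID": "unknown"}
    try:
        with open("/etc/os-release") as f:  # standard on systemd distros
            for line in f:
                key, sep, value = line.strip().partition("=")
                if sep:
                    info[key] = value.strip('"')
    except FileNotFoundError:
        pass                                # non-Linux or unusual distro
    info["arch"] = platform.machine()       # e.g. "x86_64"
    info["has_curl"] = shutil.which("curl") is not None
    return info

checks = preflight()
print(checks["ID"], checks["arch"], checks["has_curl"])
```
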

lvxferre,

Ah, got it. My bad. Yeah, not providing anything is even lazier, and unlike “lazy” bash scripts it leaves the user clueless.

lvxferre,

The settlement is right at the border of what would be controlled by the Inca government two millennia later. It shows that access to the region from the west, with the Andes in the way, is better than you’d be led to believe.

As such, if they find other cities further east, I’m predicting that, culturally speaking, they’ll look nothing like this one, even if they happen to be roughly the same size.

People ate maize and sweet potato, and probably drank “chicha”, a type of sweet beer.

“If you don’t have chicha, any small thing will do.” (reference to a certain song)

Seriously now: potentially yuca too - it grows right next door, and if they got maize from North America then they likely traded for crops.

lvxferre,

Sorry for such a late reply.

I’m not sure if I’m part of this big exodus or not. I’ve been toying with the idea of migrating this comm for months, as lemmy.ml is focused on open source and privacy while mander.xyz is focused on the sciences. It’ll be more discoverable there, easier to access from across the Fediverse, and easier to stay on the same page as the admins when it comes to the rules.

The straw that broke the camel’s back, for me, wasn’t even politics, or which sort of content they allow or deny on their instance. It was how they handled another lemmy.ml community; it shows that, as a team, they’re completely unprepared to handle users in an acceptable way.

lvxferre,

I added a link to the language learning comm in the sidebar of the new address. Thank you for the info!
