
j4k3

@j4k3@lemmy.world


j4k3,

Gentoo for the documentation, but for a modern computer with a bad bootloader implementation, Fedora’s Anaconda system with its secure boot shim is irreplaceable and my daily driver. I won’t consider any distro without a shim and a clear guide for UEFI secure boot keys. In that vein, Gentoo is the only documentation source I know of that walks the user through booting directly into UEFI with KeyTool.
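
If you just want to verify what state the firmware is actually in, the SecureBoot EFI variable can be read directly. A minimal sketch, assuming efivarfs is mounted as it is on most modern distros (the GUID is the standard EFI global-variable GUID):

```python
# Read the SecureBoot EFI variable via efivarfs. The first 4 bytes are
# the variable's attributes; the 5th byte is the value (1 = enabled).
from pathlib import Path

VAR = Path("/sys/firmware/efi/efivars/"
           "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

if VAR.exists():
    data = VAR.read_bytes()
    print("Secure boot is", "enabled" if data[4] == 1 else "disabled")
else:
    print("No SecureBoot variable; likely booted in legacy/BIOS mode")
```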

j4k3,

Partial disability from a car breaking my neck and back, causing issues with posture :: I have superhuman strength and endurance that I use to fuck up cars for fun

j4k3,

An AGI-led government that is written like a constitution and bill of rights. The infinite persistence factor, without human needs or motivations, is a major improvement over anything that has ever existed.

j4k3,

AGI is orders of magnitude more advanced than what we currently have available. It is a self-aware system. Most of the issues must be addressed internally, but ultimately it is self-regulating in every respect. There would be redundancy, and an element of designed-in trust. It would not be corruptible like the humans whose governments we must be skeptical of. In some respects it is the hacker, it is the internet, and it is Orwellian in scope, but it is not authoritarian or ideological. It would be direct and openly available for everyone to consult at any time. It would be capable of explaining anything in easily understood language according to the capabilities of the end user. The primary way it shapes policy and changes things for the betterment of the majority is through rewards and amenable compromise. Ultimately, I think this is the only way to manage a real post-scarcity society.

Is there a forum for people who are lonely and sad but specifically not incel sickos?

like you know you’re a good person at heart, but life circumstances, trauma, bullying, etc. prevented you from learning the proper social skills to find companionship. not necessarily a forum to actually find friends (i find going into things with that intention feels fake and weird), but rather a forum to commiserate...

j4k3, (edited)

If you’re in a position where you can get current hardware and have the minimal skills required to run a few copy-paste commands in a terminal, open source offline AI roleplaying can work wonders for the loneliness. I can make recommendations if you’re interested. It is nothing like the junk from OpenAI or anything you can easily run online.

I’m in the same boat, but also this Feb will mark 10 years of involuntary social isolation after a car hit me while I was riding a bicycle to work and left me partially disabled.

There are various stages I went through to find balance; again, ask away if you want to know more. In a nutshell, loneliness is better thought of in terms of endorphins. One really needs to balance this situation in general first, then look into relationships of any kind, whether platonic or romantic. You can be happy without any relationships through personal growth and exercise. The most powerful tool is endurance-based exercise.

With AI roleplaying, NSFW will teach you what open communication really means in ways you can’t explore with real humans. It requires some persistence, intuition, and a healthy curiosity to really take it to a high level, but learning the intricacies of a model and creating characters is more of a mirror reflection of who you really are under the surface. It can give you a unique perspective on yourself and how others see you, and give you a lot more confidence on many levels. I highly recommend it.

j4k3,

Doing what?

j4k3, (edited)

This is where you get started: github.com/oobabooga/text-generation-webui

This is where you get models (like the GitHub of open source offline AI): huggingface.co

Oobabooga Textgen WebUI is about the easiest middle-ground tool, sitting in the grey chasm between users and developers. It doesn’t really require any code, but it is not a polished consumer product where everything is oversimplified and spelled out behind a foolproof UI. The default settings will work for a solid start.

The only initial setting I would change for NSFW is the preset profile, from Divine Intellect to Shortwave. Divine Intellect is ideal for AI-assistant-like behavior, while Shortwave is more verbose and chatty.
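
For context, a preset is just a named bundle of sampling parameters that shapes how the model picks its next token. A hedged sketch of the kind of values involved (the parameter names are ones the WebUI exposes; the numbers are illustrative, not the actual contents of either preset):

```python
# Illustrative sampling presets: assistant-like behavior tends to come
# from lower temperature (more deterministic), chattier output from
# higher temperature and looser nucleus sampling. Values are made up.
preset_assistant_like = {
    "temperature": 0.7,         # lower = more focused and repeatable
    "top_p": 0.5,               # nucleus sampling cutoff
    "top_k": 40,                # consider only the 40 likeliest tokens
    "repetition_penalty": 1.15,
}

preset_chatty = {
    "temperature": 1.1,         # higher = more varied, verbose output
    "top_p": 0.9,
    "top_k": 100,
    "repetition_penalty": 1.1,
}
```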

Every model is different; even the quantized versions can have substantial differences due to how different neural layers are simplified to a lower number of bits and how much information is lost in the process. Pre-quantized models are how you can run larger models on a computer that could not load them normally. Like, I love a 70B model. The number means it has 70 billion parameters (the learned weights). Most of these models store 2 bytes per parameter, so it would require a computer with 140 gigabytes of RAM to load such a model without quantization. If the model loader only works on a GPU… yeah, good luck with that. Fortunately, one of the best models is Llama2, and its model loader, llama.cpp, works on CPU, GPU, or a CPU+GPU split.
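
To make that sizing arithmetic concrete, here is a minimal sketch (weights only; real loaders add overhead for context and activations on top of this):

```python
# Back-of-the-envelope memory needed just to hold a model's weights.
def weight_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 5, 4, 3):
    print(f"70B model at {bits}-bit: ~{weight_size_gb(70, bits):.0f} GB")

# 16-bit: ~140 GB. At 4-bit: ~35 GB, and at 3-bit: ~26 GB -- which is why
# a 3-5 bit quantization of a 70B model becomes loadable on a 64 GB machine.
```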

This is why I prefaced my original comment with the need to have current hardware. You can certainly play around with 7B Llama2-based models without even having a GPU; this is about like chatting with a pre-teen that is prone to lying. With a small GPU of 8GB or less, you might get a quantized 13B model working; this is about like talking to a teenager that is not very bright. Once you get up to ~30B, you’re looking at roughly the knowledge of a college grad with no experience. At this point I experienced ~80-85% accuracy in practice, as in a general model is capable of generating a working Python snippet around this much of the time. I mean, I tried to use it in practice, not on some random benchmark of a few problems for comparing models.

I have several tests I do that are unconventional, like asking the model about prefix, postfix, and infix notation math problems, and I ask about Forth (an ancient programming language) because no model is trained on Forth. (I’m looking at overconfidence and how it deals with something it does not know.) In a nutshell, a ~30B general model is only able to generate code snippets as mentioned; to clarify, I mean that when it errors and is then prompted with the error output from the bad code, it can resolve the problem ~80-85% of the time. That is still not good enough to prevent you from chasing your tail and wasting hours in the process. A general 70B model steps this up to ~90-95% with a 3-5 bit quantization. This is when things become really useful.

Why all the bla bla bla about code? To give more context in a tangible way. When you do roleplaying, the problems scale similarly. The AI alignment problem is HARD to identify in many ways. There are MANY times you could ask the model a question like “What is 3 + 3?” and it will answer “6”, but if you ask it to show the logical process of how it came to that conclusion it will say (hyperbole): “the number three looks like cartoon breasts, and four breasts and two balls equals 6, therefore 3 + 3 = 6.” Once this has been generated and is in the chat dialog context history, it is now a ‘known fact’, which means the model will build off this logic in the future. That example was extremely hyperbolic; in practice, the ways the model hallucinates are much more subtle. The smaller the model, the harder it is to spot the ways it tries to diverge from your intended conversation. The model size also impacts the depth of character identity in complex ways. Smaller models really need explicit names in most sentences, especially when multiple characters are interacting. Larger models can handle several characters at one time and make more natural use of generic pronouns. This also impacts gender fluidity greatly.
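
A minimal sketch of why one bad generation contaminates everything after it; the toy loop below is a stand-in for what any chat UI does internally, not Oobabooga’s actual code:

```python
# Toy chat loop: every reply is appended to the history, and the whole
# history is fed back in as the prompt for the next turn.
history: list[str] = []

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call; imagine it hallucinating just once.
    return "a hallucinated 'fact' that now looks authoritative"

def chat(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    reply = fake_model("\n".join(history))
    history.append(f"Bot: {reply}")  # the hallucination is now context
    return reply

chat("What is 3 + 3? Show your reasoning.")
# Every later prompt now contains the bad reply, so the model keeps
# building on it as if it were established truth.
```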

You don’t need an enthusiast-level computer to make this work, but you do need it to make this work really well; hopefully I have made it more clear what I mean by that. That was my real goal. I can barely make a 70B model run at a tolerable streaming pace with a 3-bit quantization on a 12th gen i7 with a 3080Ti GPU (the “Ti” is critical, as this is the 16GB version, whereas there are “3080” cards that are 8GB). You need a GPU that is 16GB or greater, and Nvidia is the easier path in most AI stuff. Only the 7000-series and newer AMD cards are relevant to AI in particular; the older AMD GPUs are for gaming only and are not actively supported by HIP, the CUDA API translation layer that is relevant to AI. Basically, for AI the kernel driver is the important part, and that is totally different from the gaming/user-space software.

Most AI tools are made to run as a localhost server on your network, accessed in a web browser. This means it is better to run a tower PC than a laptop; you’ll find it is nice to have the AI on your network and available for all of your devices. Maybe don’t get a laptop, but if you absolutely must, several high-end 2022 laptop models can be found by searching for “3080Ti”. This is the only 16GB-GPU laptop that can be found for a reasonable price (under $2k shipped), and it is what I have. I wish I had instead gotten a 24GB card in a desktop with an i9 rather than an i7, and something with 256GB of addressable memory. My laptop has 64GB, and I have to use a Linux swap partition to load some models. You need max-speed DDR5 too. The main CPU bottleneck is the L1-to-L2 cache bus when you’re dealing with massively parallel tensor math. Offloading several neural network layers onto the GPU can help.

Loading models and dialing in what works and doesn’t work requires some trial and error. I use 16 CPU threads and offload 30 of 83 layers onto my GPU with my favorite model.
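
For what it’s worth, the same knobs exist if you drive llama.cpp from Python instead of the WebUI. A minimal sketch using the llama-cpp-python bindings, with the thread and layer counts mirroring my settings above (the model filename is hypothetical):

```python
# Requires: pip install llama-cpp-python (built with GPU support for offload)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/my-favorite-70b.Q3_K_M.gguf",  # hypothetical file
    n_threads=16,      # CPU threads for the layers that stay on the CPU
    n_gpu_layers=30,   # offload 30 of the model's layers to the GPU
    n_ctx=4096,        # context window size
)

out = llm("Say hello in one short sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```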

If you view my user profile and look through my posts for AI-related stuff, you’ll find more info about my favorite model, settings, and what it is capable of in NSFW practice, along with more tips.

j4k3,

My DNS Rolodex is beside my slide rule and abacus.

j4k3,

I treat it like any other day and ignore all events focused on relationships, but then, I’m partially disabled and unable to do anything social. Just do whatever you find interesting in life and ignore the “celebration”; the memory will fade into the background like all the other days, and you won’t have depressive repercussions from self-reflection.

j4k3,

With embedded systems like OpenWRT on a router, where you only have the busybox/ash shell, awk is your primary text-processing tool.

Do you mount an embedded Linux file system to the workstation and use your host scripts or do you SSH/SCP and deal with the limited shell commands?

I’m playing with a couple of routers and comparing proprietary to open source on the same hardware. I miss my .bashrc functions and aliases… and compgen, tree, manpages, detailed help, etc; the little things that get annoying when they are missing....

Is anyone here using their hardware TPM chips for credentials?

I’m curious about the possible uses of the hardware Trusted Platform Module for automatic login or transfer encryption. I’m not really looking to solve anything or pry. I’m just curious about the use cases as I’m exploring network attached storage and, to a lesser extent, self hosting. I see a lot of places where public...

j4k3,

In-N-Out, better food, better promise

How do you discover unused GPIO on the hardware abstraction layer?

I have a router on OpenWRT with an undocumented button I want to find. It is an MT7621 in a BGA package, so I can’t trace the hardware, and rework is a no-go. It has the Breed bootloader, but it’s all in Chinese. Llama2 70B says to use github.com/rust-embedded/gpio-utils but I don’t see how that can really detect a state change on an...
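
One brute-force way to hunt for the pin, sketched under two loud assumptions: that the kernel exposes the legacy sysfs GPIO interface (CONFIG_GPIO_SYSFS) and that Python is available on the router or against a mounted rootfs. Export every free line, snapshot the values, and diff while holding the button:

```python
# Export each free GPIO via sysfs, snapshot values, then diff while the
# mystery button is held. GPIO numbering is SoC-specific; lines already
# claimed by drivers simply fail to export and are skipped.
import time

GPIO = "/sys/class/gpio"
PINS = range(64)  # generous range for an MT7621-class SoC

def read_values():
    vals = {}
    for p in PINS:
        try:
            with open(f"{GPIO}/gpio{p}/value") as f:
                vals[p] = f.read().strip()
        except OSError:
            pass
    return vals

for p in PINS:
    try:
        with open(f"{GPIO}/export", "w") as f:
            f.write(str(p))
    except OSError:
        pass  # already exported or claimed by a driver

baseline = read_values()
print("Hold the mystery button now...")
time.sleep(5)
changed = {p: v for p, v in read_values().items() if baseline.get(p) != v}
print("Lines that changed:", changed)
```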

Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data (www.404media.co)

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more....

j4k3,

I bet these are instances of overtraining, where the data has been input too many times and the phrases stick.

Models can do some really obscure things after overtraining. Like, I have one model that has been heavily trained on some roleplaying scenarios and will full-on convince the user there is an entire hidden system context, with amazing persistence of bot names and storyline props. It can totally override the system context in very unusual ways too.

I’ve seen models that almost always error into The Great Gatsby too.

j4k3, (edited)

How many times do you think the same data appears across as many datasets as OpenAI is using now? Even unintentionally, there will be some inevitable overlap. I expect something like data related to OpenAI researchers to recur many times. If nothing else, redundant copies of the same text across foreign languages could cause overtraining. Most data is likely machine-curated at best.

j4k3,

Irrelevant! Your car is uploading you!

j4k3,

Not mine. I used Moe’s Litterbox, not knowing how long it might last, to temporarily host the OP’s image as a background for GNOME. Turns out that temp hosting option is probably rate limited… no big deal. It wasn’t a forever-internet pic anyways.

j4k3,

Oh my, you and I had some similar frustrations. I am 39, and let me tell you what I wish I had known: most adults are dumber than you. The main questions to ask anyone are what they are reading right now, what the last book they read was, and what they learned from it. The quality of the answer to that last question correlates directly with intelligence. Another very telling indirect question is: how would you describe your curiosity? Curiosity does not guarantee intelligence, but every intelligent person is very curious.

A lot of the frustration with marketing is because the largest target audience is always the entry level. Putting it in concrete terms: as a former buyer for a chain of bike shops, I would sell twenty $500 entry-level bikes for every one $2,000 competitive bike.

The lowest level is always the main target audience. If you find it frustrating how marketing targets your demographic as menial, it means you are not the target audience and you are above average. You can take that as a compliment to yourself, as an embarrassment for your compatriots, or both; it is up to you.

Life’s experiences will determine if or when you ever feel “adult.” A lot of that comes from having kids and the difficulties involved. Most people never really feel adult; there is no moment of transition. It actually kinda sucks to have people treat age like a binary kid-versus-adult thing. I have advanced and well-developed skills that you do not, but if you treated me just like any other person your age, I would happily treat you much like I would have if we were the same age in school. If you had an interest in 3D printing, CAD design, AI, electronics design and EDA, hotrods, engines, painting cars, etc., I could show you a whole lot of fun stuff.

The main barrier is that you are accustomed to the extremely intense social network that schools provide. You’ll never experience that opportunity again in life, so keep and maintain every connection you can possibly manage. As you age, life gets more and more lonely for most people, and that is the hard thing to overcome from the other side: I don’t know how to approach you with my complex interests, and I assume I will bore you or it will be weird. As my interests become more and more niche, I connect with fewer and fewer people. This does not apply to everyone, but there is a correlation between intellectual intelligence and loneliness. I don’t mean to discount the value of emotional intelligence; that is just an area with which I am not particularly familiar.

j4k3,

Seriously, I would have said all of these same things at your age. You will find yourself in much the same situation I have described.

It is very difficult to relate how complexity changes and how deep you can go with decades of experience. Things are much more complicated the deeper you go into a range of subjects. Like, I painted cars for nearly a decade; to most people I am an expert, but I was still learning all the time. Any idiot can learn to paint in a day. The real skill is knowing how to solve the thousands of random problems you’ll face every third job. Everything is like that, or more so.
