Sorry, I should have been more specific - it’s an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you’re done - is there anything beyond this in WFC?
WFC is a full method of map generation. Monte Carlo is not afaik.
MC is a statistical method; on its own it has nothing to do with map generation. If you apply it to map generation, you get a “full method of map generation”, and as far as I know that is exactly what WFC is.
To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it with a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.
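Roughly, that preprocessing step looks something like this - a minimal sketch in JavaScript, with the grid values, kernel size and function names made up for illustration:

```javascript
// Minimal sketch of the "overlapping model" preprocessing step:
// slide an n×n kernel over the sample grid and count how often each
// pattern occurs. The resulting counts are the weights used when a
// cell's "wavefunction" is collapsed to a single pattern.
// (Grid values, kernel size and names are made up for illustration.)
function extractPatterns(sample, n) {
  const height = sample.length;
  const width = sample[0].length;
  const counts = new Map();

  for (let y = 0; y + n <= height; y++) {
    for (let x = 0; x + n <= width; x++) {
      // Serialize the n×n window so it can be used as a map key.
      const rows = [];
      for (let dy = 0; dy < n; dy++) {
        rows.push(sample[y + dy].slice(x, x + n).join(","));
      }
      const key = rows.join(";");
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
  return counts; // pattern -> frequency, i.e. the probability model
}

// Tiny 4×4 "training image" scanned with a 2×2 kernel.
const sample = [
  ["sea", "sea", "shore", "land"],
  ["sea", "shore", "land", "land"],
  ["shore", "land", "land", "land"],
  ["land", "land", "land", "land"],
];
console.log(extractPatterns(sample, 2));
```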
Could you share the paper? Everything I’ve read about WFC is “you have tiles that are stitched together according to rules with a bit of randomness”, which is literally MC.
For example, earlier this week I saw a post on Lemmy where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.
Colleagues of mine have also recommended that I uninstall required system packages. Does that mean my colleagues aren’t intelligent/conscious? That humans in general aren’t?
Is it possible that you’re on a different TTY? The login screen used in Fedora has some problems switching to the correct TTY if you don’t use auto-login. If this happens again, try cycling through the TTYs (Ctrl+Alt+F1 through F7); maybe your old session is still there.
When all I want to do is read content, no JS is needed.
I didn’t say otherwise.
UX is problematic because now you have these huge PC screens and comparatively tiny mobile screens to account for. Most developers go mobile-first and completely ignore the rest, so you get loads of sites that are needlessly displayed like slow PowerPoint presentations, autoscrolling to the next anchor because that’s “good UX” somehow.
Okay? I’m not sure what you’re arguing against. Some websites have bad UX, and that means the technology used to implement that bad UX is in itself bad?
Form validation with JS goes back decades, and no one in their right mind relies entirely on frontend validation.
I didn’t say anyone should rely entirely on frontend validation.
It’s great because it can be immediate, but it’s easier to sidestep either by accident or on purpose. Since a lot of forms nowadays are “autogenerated” from their respective UI libraries, they come with a lot of unnecessary cruft.
Again, what exactly are you arguing for or against? You said “don’t use JavaScript when you don’t need it”. You don’t need frontend validation, it’s a nice-to-have, but it would be incredibly stupid to say “this form is way better without frontend validation”.
I sure hope that doesn’t need a “local server” of any sort to work. It’s one of the things that baffles me the most: JavaScript that only works when there’s an npm server to connect to. I also hope it’s not bundled as an Electron app; what’s the point of having an entire Chrome browser bundled just to run a single page?
No, the single HTML file I’m talking about doesn’t require a server or Electron or anything besides a browser. What are you on about?
You either seem to be willfully misunderstanding me, or you’re projecting a bunch of random webdev grievances onto me. Why?
I don’t agree completely - there are a lot of things that are possible without JavaScript but are still improved by it, either through better UX or through added safety.
Easy examples of better UX are anything to do with forms and multi-step processes. Getting validation errors while typing is massively better than getting them on submit, and it’s easy to store temporary edit states locally to prevent data re-entry. This especially goes for offline-first applications.
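As a minimal sketch of what I mean (plain JS; it assumes an `<input type="email" id="email">` and a `#email-error` element, and the storage key is made up):

```javascript
// Immediate feedback while typing instead of on submit, plus a locally
// stored draft so a reload or crash doesn't throw away what was typed.
// The element IDs and the localStorage key are made up for this sketch.
const emailInput = document.querySelector("#email");
const emailError = document.querySelector("#email-error");

// Restore a previous draft, if there is one.
emailInput.value = localStorage.getItem("signup-draft-email") ?? "";

emailInput.addEventListener("input", () => {
  // Built-in constraint validation gives the error state for free.
  emailError.textContent = emailInput.validity.valid
    ? ""
    : "Please enter a valid e-mail address.";
  // Keep the draft around locally to prevent re-entry.
  localStorage.setItem("signup-draft-email", emailInput.value);
});
```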
IMO more importantly, local JS is always preferable to server-side logic when possible, since it means your data never leaves your browser. Imagine a JSON formatter that processes data server-side - you can never be sure what they are doing with your data! Compared to that, local JS is incredibly portable (every platform has a browser) and isn’t reliant on anything else. I build my utility apps both in the usual bundler way and as single files - meaning I can offer my app as a single HTML file you can download and use however you want.
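To make that concrete, here’s a toy version of such a single-file tool - one HTML file with a bit of inline JS, no dependencies, and nothing ever leaves the page (IDs and layout are made up, no styling or polish):

```html
<!-- Toy single-file JSON formatter: download it, open it in any browser,
     everything runs locally via JSON.parse/JSON.stringify. -->
<!DOCTYPE html>
<meta charset="utf-8">
<title>JSON formatter</title>
<textarea id="in" rows="10" cols="60">{"hello":"world"}</textarea>
<button id="fmt">Format</button>
<pre id="out"></pre>
<script>
  document.getElementById("fmt").addEventListener("click", () => {
    const raw = document.getElementById("in").value;
    try {
      // All processing happens locally in the browser.
      document.getElementById("out").textContent =
        JSON.stringify(JSON.parse(raw), null, 2);
    } catch (err) {
      document.getElementById("out").textContent = "Invalid JSON: " + err.message;
    }
  });
</script>
```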
Of course the security benefits aren’t perfect - it’s always possible data is still sent somewhere. I really hope that one day we’ll get an API that allows a website to limit further network connections to specific URLs. This would give users of such applications real peace of mind.
Wiktionary: (software engineering) A version of a program that is nearly ready for release but may still have a few bugs; the status between beta version and release version.
Oxford: a version of a product, especially computer software, that is fully developed and nearly ready to be made available to the public. It comes after the beta version.
I couldn’t find more definitions from “big” dictionaries, but literally no definition I’ve seen agrees with you. I wonder why that is.
I am happy with my simple docker-compose setup - one root folder with one subfolder per project containing the compose file and any configuration mounted into the container. Traefik automatically exposes all services I want under a well-known URL using a single line in each compose file. Watchtower updates the containers.
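To give an idea of how little is in each of those compose files, here’s a stripped-down sketch (service name, hostname and network are made up, and it assumes the Traefik container is configured elsewhere to watch Docker labels and sits on the same network):

```yaml
# ~/services/whoami/docker-compose.yml - minimal sketch, names made up.
services:
  whoami:
    image: traefik/whoami
    restart: unless-stopped
    labels:
      # The single Traefik line: expose this service under its well-known URL.
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.lan`)"
    networks:
      - proxy   # shared network the Traefik container is attached to

networks:
  proxy:
    external: true
```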
This has been running stable for over two years with probably 2-3 reboots in between. If my current NUC ever breaks I’ll set it up again using Podman instead of Docker, but aside from that I couldn’t be happier!