Every time I hear this from one of the devs under me I get a little angrier. Such a meaningless statement. What are you gonna do, hand your PC to the fucking customer?
It's not actually meaningless. It means "I did test this and it did work under certain conditions." So if you can work out which conditions are different on the customer's machine, that might give you a clue as to what happened.
The most obscure bug I ever created ended up being something that would work just fine on any machine that had at any point had Visual Studio 2013 installed, even if it had since been uninstalled (the uninstall left behind the library that my code change had introduced a hidden dependency on). It would only fail on a machine that had never had Visual Studio 2013 installed. This was quite a few years back, so most of the computers throughout the company had had 2013 installed at some point; only brand new ones that hadn't been used for much would crash when they happened to touch my code. That was a fun one to figure out, and the list of "works on this machine" vs. "doesn't work on that machine" was useful.
That's not been my experience. It'll tend to be agreeable when I suggest architecture changes, or if I insist on some particular suboptimal design element, but if I tell it "this bit here isn't working" when that clearly isn't the real problem, I've had it disagree with me and tell me what it thinks the bug is really caused by.
Models are geared towards producing the response a human will rate best, not necessarily the best answer itself. The first answer is based on the probability of autocompleting from a huge sample of data, and versions that have a memory adjust later responses to how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to try to verify them and pick the best one. Basically, rather than spitting out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result. Still not AGI, but it's more useful than the first LLMs.
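A rough sketch of that "generate many, then verify and pick" idea, with hypothetical stand-ins for the sampler and the verifier (none of this is any vendor's actual API):

```python
import random

def sample_answer(prompt: str) -> str:
    """Hypothetical stand-in for a single 'autocomplete' pass of a model."""
    return f"candidate {random.randint(0, 999_999)} for: {prompt}"

def verify(prompt: str, answer: str) -> float:
    """Hypothetical verifier: a unit test, a checker model, or a heuristic."""
    return random.random()

def best_of_n(prompt: str, n: int = 100) -> str:
    """Rather than returning the first sample, draw n of them and keep
    whichever one the verifier scores highest."""
    candidates = [sample_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: verify(prompt, ans))
```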
I have a love/hate relationship with Docker. On one side, it’s convenient to have a single-line start for your services. On the other side, as a self-hoster, it has made some developers rely only on Docker, meaning that deploying the stack from source is just an undocumented mess.
Also, following the log4j vulnerability, I tend to prioritize building from source, as some Docker images were updated far later than the source code was.
I love Docker because it is the only sane method to selfhost shit with my Synology NAS, and I love my Synology NAS because it is the only Linux interaction that I have (from my old MacBook Pro).
The Dockerfile is essentially the instructions for deploying from scratch. Sure, it most likely only exists for one distro, but adapting isn’t a huge chore.
You can also clone the repo and build the container yourself. If you want to update, say, log4j, and then attempt to build it, that’s still entirely possible, and it’s easier than building from scratch considering the build environment is consistent.
If I’m updating the source code already, I might as well build my service from it. I really don’t see how building a Docker container afterward makes it easier, considering the update can also break compatibility with the Docker environment.
Also, adapting can be a PITA when the package is built around a really specific environment. Like, if I see that the Dockerfile installs a MySQL database, can I instead connect it to my PostgreSQL database, or is it completely incompatible? That’s not really something the Dockerfile would tell me.
I really don’t see how building a Docker container afterward makes it easier
What it’s supposed to make easier is both sandboxing and reuse / deployment. For example, Docker + Traefik makes some tasks so incredibly easy and secure compared to running them on bare metal. Or if you need to spin up multiple instances, they can be created and destroyed in seconds. Without the container, this just isn’t feasible.
The Dockerfile uses MySQL because it works. If you want to know whether the core service works with PostgreSQL, that’s not really on the guy who wrote the Dockerfile; that’s on the application maintainer. Read the docs, do some testing, and create your own container using its own PostgreSQL or connecting to an external database if that suits your needs better.
Once again the flexibility of bind mounts means you could often drop that external database right on top of the one in the container. That’s the real beauty of Docker IMO, being able to slot the containers into your system seamlessly due to the mount system.
adapting can be a PITA when the package is built around a really specific environment
That’s the great thing about Docker: it lets you bring that really specific environment anywhere, in an incredibly lightweight manner compared to the old days of heavyweight VMs. I’ve even got Docker containers running on a Raspberry Pi B+ that is otherwise so old that it would be nearly impossible to install the libraries required to run modern software.
Also, I created this repo to set up a reproducible sec environment for myself. I added other languages, but I personally work mostly with Python. It is basically responsible for handling all the boilerplate:
For packaging in Docker I started to use the nix2container project, as it gives me greater control over layers. So, for example, when I package my Python app I typically use 3 layers:
Python and its dependencies
my application’s dependencies
my application itself, which is very tiny compared to the other two, so there is great reuse of the layers
The algorithm mentioned in the video also helps a lot with reuse, but the above is optimized more around how frequently things typically change.
BTW: today I discovered github.com/astro/microvm.nix. I haven’t played with it yet, but in theory it would let me generate a microVM image (in a similar fashion to generating a Docker container), which would let me run my app natively as a tiny VM on EC2, for example, using only the minimum necessary parts of a typical OS to run it.
Toner’s role is being underplayed by the video. She’s potentially calling Altman out for underrating the dangers of AI.
Altman, at least, is lying about something: how much progress OpenAI is making towards AGI in the short term. The above might’ve bought the bullshit fully, while Sutskever knows that it’s bullshit.
I’m not sure if the board is also lying or not.
The boiling point was likely OpenAI potentially receiving a cash grant from some scummy party, which would be in a moral grey area considering the "non-"profit goals of the company.
Everybody will get a bit more free popcorn for a while. 🍿 This mess is far from over.
The stalebot is useless most of the time. The only scenario where I can see a use for it is a maintainer waiting for the reporter to add information. But closing issues because no maintainer checked on them? That’s garbage and it discourages bug reports.
After an extremely long week, I sometimes participate in open source. I have to deal with malicious commits. I have to follow up on issues from misguided individuals who are actually looking for tech support. I have to guide new contributors through how this massive repo works and how to submit tests. I have to negotiate with the core team, and these conversations can often last months or years.
And contributing to open source is one of the few things that give me pleasure, even if it’s an extremely thankless job.
But I’m tired, man.
I’m not dealing with low-quality memers who provide zero value. Nor should we encourage it.
I do FOSS as well, but I’d rather people have fun punting the stalebot than just keep repeating “this issue still exists”. I will probably get a chuckle out of it.
Facebook, for all its nastiness, was thoroughly incompetent at influencing the direction of the web. Look at its failed attempts, like Free Basics.
Google, on the other hand, has the web tightly in its dirty grip. At this point, they aren’t even pretending to be nice. Even plans that cause them reputational damage get brought back under some other name.
The only way to stop Google is for the regulatory agencies to put their foot down hard. It should be broken up into at least a couple dozen companies that are not allowed to do business with each other.
I actually see a legitimate use case for it, and I helped add the GitHub Actions version to a project where I'm a collaborator.
Quite a bit. Certain bugs disappear after an update without us targeting them (partially because the logs get fudged a bit after going through dependencies, so sometimes multiple bugs have the same cause, or it's actually a dependency issue that got fixed), and sometimes we forget about old feature requests.
The stale reminder doubles as a reminder for us to (re)consider working on the issue. When we know something probably isn't suddenly going to get fixed, we apply a label to the issue. For enhancements that we'll definitely work on soon™, we apply "help wanted". We've configured the action to ignore both labels. We also patrol notifications from the stale bot to see whether something shouldn't go stale. This is a medium-sized project, so we can handle the patrolling, and IMO this helps us quite a bit.
Fair enough; I didn’t consider artifacts like logs and traces. I suppose a stale marker might prompt the original reporter to retest and supply fresh ones (or confirm it’s fixed in the dependency case).
In an ideal world I suppose we’d have automated tests for all bug reports but that’s obviously never going to happen!
I’ve seen this same thing happen with Python’s type hints. Turns out giving an “escape hatch” type for devs who have no clue what the type actually is leads to a lot of useless type hints.
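To illustrate (a made-up function, not from any particular codebase): an Any annotation is technically a type hint, but it tells the caller nothing and stops the checker from verifying anything built from the result.

```python
from typing import Any

# "Annotated", yet Any says nothing about what actually comes back,
# and the type checker gives up on everything derived from it.
def load_setting(name: str, default: Any = None) -> Any:
    ...
```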
Yeah, it’s especially bad when a library doesn’t provide type hints itself. It can be comically difficult to find out what the return type of a function is, because every if-else branch might return a different type, so you may need to read the function body in full to figure out what the type might be.
Add to that that lots of the tooling around type hints isn’t as fleshed out or as useful as it is in fully typed languages, and I can definitely understand why someone might not immediately feel like it’s a valuable use of their time.
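For example (a made-up function, just to illustrate the branch problem from the previous comment): the honest hint only becomes clear after reading every branch, and it ends up being a union.

```python
# Python 3.10+ union syntax; on older versions this would be typing.Union.
def parse_setting(raw: str) -> int | float | str:
    """Each branch returns a different type, which is exactly what you only
    discover by reading the whole body when there are no hints."""
    if raw.isdigit():
        return int(raw)
    try:
        return float(raw)
    except ValueError:
        return raw
```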