I have a love/hate relationship with Docker. On one hand, it’s convenient to have a single-line start for your services. On the other hand, as a self-hoster, it has made some developers rely only on Docker, meaning that deploying the stack from source is just an undocumented mess.
Also, following the log4j vulnerability, I tend to prioritize building from source, as some Docker packages were updated far later than the source code was.
I love Docker because it is the only sane method to selfhost shit with my Synology NAS, and I love my Synology NAS because it is the only Linux interaction that I have (from my old MacBook Pro).
The Dockerfile is essentially the instructions for deploying from scratch. Sure, they most likely only exist for one distro, but adapting isn’t a huge chore.
You can also clone the repo and build the container yourself. If you want to update, say, log4j, and then attempt to build it, that’s still entirely possible, and easier than from scratch, considering the build environment is consistent.
If I’m updating the source code already, I might as well build my service from it. I really don’t see how building a Docker container afterward makes it easier, considering the update can also break compatibility with the Docker environment.
Also, adapting can be a pita when the package is built around a really specific environment. Like, if I see that the Dockerfile installs a MySQL database, can I instead connect it to my PostgreSQL database, or is it completely incompatible? That’s not really something the Dockerfile would tell me.
I really don’t see how building a docker container afterward makes it easier
What it’s supposed to make easier is both sandboxing and reuse / deployment. For example, Docker + Traefik makes some tasks so incredibly easy and secure compared to running them on bare metal. Or if you need to spin up multiple instances, they can be created and destroyed in seconds. Without the container, this just isn’t feasible.
The Dockerfile uses MySQL because it works. If you want to know whether the core service works with PostgreSQL, that’s not really on the guy who wrote the Dockerfile; that’s on the application maintainer. Read the docs, do some testing, and create your own container using its own PostgreSQL or connecting to an external database if that suits your needs better.
Once again, the flexibility of bind mounts means you could often drop that external database right on top of the one in the container. That’s the real beauty of Docker IMO: being able to slot containers into your system seamlessly thanks to the mount system.
adapting can be a pita when the package is built around a really specific environment
That’s the great thing about Docker: it lets you bring that really specific environment anywhere, and in an incredibly lightweight manner compared to the old days of heavyweight VMs. I’ve even got Docker containers running on a Raspberry Pi B+ that is otherwise so old it would be nearly impossible to install the libraries required to run modern software.
Also, I created this repo to give myself a reproducible sec environment. I added other languages, but I personally work mostly with Python. It basically handles all the boilerplate for me.
For packaging in Docker I started to use the nix2container project, as it gives me greater control over layers. So, for example, when I package my Python app I typically use 3 layers:
Python and its dependencies
my application dependencies
my application, which is very tiny compared to the other two, so there is great reuse of the layers
The algorithm mentioned in the video also helps a lot with reuse, but the above is further optimized for how frequently things typically change.
BTW: today I discovered github.com/astro/microvm.nix. I haven’t played with it yet, but in theory it would let me generate a microVM image (in a similar fashion to generating a Docker container), which would let me run my app natively as a tiny VM on EC2, for example, using only the minimum of a typical OS necessary to run it.
At the very least I’d try to clean up that fuzzy condition on behavior to anticipate any bad or inconsistent data entry.
WHERE UPPER(TRIM(behavior)) = 'NICE'
Depending on the possible values in behavior, adding a wildcard or two might be useful, but I’d need to know more about that field to be certain. Personally, I’d rather see a methodology using code values or existing indicators instead of a string, but that’s often just wishful thinking.
Edit: Also, why dafuq are we doing a select all? What is this, intro to compsci? List out the columns you need, ya heathen ;)
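To make that concrete, here’s a rough sketch using an in-memory SQLite table; the `children` table and its columns are made up, the point is just the explicit column list plus the normalized comparison on behavior:

```python
import sqlite3

# Hypothetical table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE children (name TEXT, address TEXT, behavior TEXT)")
conn.execute("INSERT INTO children VALUES ('Alice', '1 Main St', ' nice ')")
conn.execute("INSERT INTO children VALUES ('Bob', '2 Side St', 'NAUGHTY')")

# Explicit column list instead of SELECT *, and a normalized comparison
# so ' nice ', 'Nice' and 'NICE ' all match.
rows = conn.execute(
    """
    SELECT name, address
    FROM children
    WHERE UPPER(TRIM(behavior)) = 'NICE'
    """
).fetchall()
print(rows)  # [('Alice', '1 Main St')]
```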
Honest question, which ones wouldn’t it work with? Most add a semicolon to the end automatically, or have libraries and interfaces that have saved me a million times.
I’m not sure how including a final semicolon can protect against an injection attack. In fact, the “Bobby Tables” attack specifically adds in a semicolon, to be able to start a new command. If inputs are sanitized, or much better, passed as parameters rather than string concatenated, you should be fine - nothing can be injected, regardless of the semicolon. If you concatenate untrusted strings straight into your query, an injection can be crafted to take advantage, with or without a semicolon.
You need semicolons to separate the commands if it is a script with multiple of them. A semicolon is not needed for a single statement, like you would use in most language libraries.
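To make the difference concrete, here’s a minimal sketch with Python’s sqlite3 (the table and input are made up): the parameterized version treats the Bobby Tables payload, semicolon and all, as plain data, while the concatenated version is exactly where an injection would land.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Untrusted input, Bobby Tables style - note the semicolon in the payload.
name = "Robert'); DROP TABLE students;--"

# Parameterized: the driver passes the value separately from the SQL text,
# so quotes and semicolons inside it are just data, never a new command.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

# Concatenating untrusted strings into the query is where injection lives,
# with or without a trailing semicolon on your own statement:
# conn.execute("INSERT INTO students (name) VALUES ('" + name + "')")  # don't

print(conn.execute("SELECT name FROM students").fetchall())
```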
Right, you can make that kind of money when you have 40 years of COBOL behind you. But even for new entrants, $90k seems low. There had better be a premium for dealing with old bullshit, especially when you’re probably damaging your resume in the long run.
90k sounds pretty standard for inexperienced (although maybe not first job) devs in general for most markets. Throw in factors like experience or skills in low supply and that changes pretty fast.
I know that COBOL isn’t going away anytime soon, but most companies have seen the writing on the wall for a long time. Anywhere that COBOL can be replaced with something more modern, it’s already underway. Some places even have a surplus of COBOL devs because of it. But there are countless places where it can’t be replaced, at least not reasonably.
The only way a COBOL dev is making $90k after 5 years is if there are very specific fringe benefits that make them not want to move along, or they are extremely naive about the market.
Anywhere that COBOL can be replaced with something more modern, it’s already underway
Rewrites are extremely risky though, and some companies don’t want to risk it. That COBOL code probably has 40 years worth of bug fixes and patches for every possible edge/corner case. A rewrite essentially restarts everything from scratch.
Do you know of a decent sized company that successfully migrated away from COBOL? I’d be interested in reading a whitepaper about how they did it, if such a thing exists.
Considering it uses day then month, 24hr clock, and distance in km, I’m guessing the reason why it’s not “human readable in American” is because it’s intended to be “human readable for pretty much everybody else”
I once worked in a software shop where all release packages had the Unix epoch timestamp in the filename. Yes, these sorted brilliantly, making it trivial to find the latest one. But good luck finding a build from a specific date/time.
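For what it’s worth, converting in either direction is trivial, so even a tiny helper would have solved the “build from a specific date” problem; a rough sketch in Python, with a made-up timestamp:

```python
from datetime import datetime, timezone

# Epoch stamp pulled from a hypothetical release filename -> human-readable
ts = 1340280000
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())  # 2012-06-21T12:00:00+00:00

# Human-readable date -> epoch, to know roughly which filenames to look for
target = datetime(2012, 6, 21, tzinfo=timezone.utc)
print(int(target.timestamp()))  # 1340236800
```

Of course, a zero-padded ISO-style date in the filename would have sorted just as well and stayed human readable.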
As a side note, the program is amazingly performant. For small numbers the results are instantaneous, and for a large number close to the 2^32 limit the result is still returned in around 10 seconds.
For a long time I’ve been of the opinion that you should only ever optimize for the next sucker colleague who might need to read and edit your code. If you ever optimize for speed, it needs to be done with massive benchmarking / profiling support to ensure that the changes you make are worth it. This is especially true with modern compilers / interpreters that try to use clever techniques to optimize your code either on the fly, or before making the executable.
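As a minimal sketch of what I mean by measuring first: something like Python’s timeit is the bare minimum before claiming a “faster” version is worth the readability hit (the two functions here are just made-up stand-ins):

```python
import timeit

def concat_loop(n=1000):
    # The "obvious" version: repeated string concatenation in a loop.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n=1000):
    # The supposedly optimized version: build once with join.
    return "".join(str(i) for i in range(n))

# Measure both before deciding the rewrite is worth it.
for fn in (concat_loop, concat_join):
    print(fn.__name__, timeit.timeit(fn, number=2000))
```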
I do feel like it’s good, though, when libraries optimize. Ideally, they don’t have much else to do than one thing really well anyways.
And with how many libraries modern applications pull in, you do eventually notice whether you’re in the Python ecosystem, where most libraries don’t care, or in the Rust ecosystem, where many libraries definitely overdo it. Because, well, they also kind of don’t overdo it, since as a user of the library you don’t see any of it, except the cumulative performance benefits.
Libraries are also written and maintained by humans.
It’s fine to optimize if you can truly justify it, but that’s going to be even harder in libraries that are going to be used on multiple different architectures, etc.
I’m still mad he didn’t use the size of the number to tell the system which block to read first. I feel like that would be a great use of division or maybe modulus?
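If the lookup really is split into fixed-size blocks, the arithmetic would be about this simple; this is purely a hypothetical sketch, since I don’t know how the blocks in the video are actually laid out, and the block size and file names are made up:

```python
BLOCK_SIZE = 1_000_000  # hypothetical number of entries per block file

def locate(n: int) -> tuple[int, int]:
    """Return (block_index, offset_within_block) for a given number."""
    return n // BLOCK_SIZE, n % BLOCK_SIZE

block, offset = locate(3_123_456_789)
print(f"read block_{block}.dat, entry {offset}")
```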
// increase the dynamically allocated memory space of a word sized integer stored at the memory address represented by the symbol “x” by the integer 1 and terminate the instruction
Probably would fall into the scope of a compositor in Wayland, rather than the protocol. I suspect it originated with old CRT displays; sometimes they can appear to scan diagonally.
Even without that use case, I think it’s great to have around in order to support novel displays and display-like devices.
That’s because the COBOL OGs are retired or retiring, and the industry has been training young people while telling them “yeah, sorry, this is all we can pay you”. Here in Europe, they’ll take unemployed people from a different industry, put them on a training course, and bang! You’ve got a grateful new dev who doesn’t know how much they are worth.
You just gotta keep spreading the message. I keep happily sharing my salary, especially with younger, less experienced devs, so we can all win better.
For real. Even just talking to your fellow coding monkeys helps. It’s ironic that for example here in France, despite all our workers rights and revolutionary tradition, speaking about your salary is still a social faux-pas. And who benefits? Certainly not us.
I’d understand actively pressuring someone to share their salary being a faux pas. Admittedly, just sharing your own may make some people feel pressured to share theirs out of reciprocity, but it generates nowhere near the same amount of pressure as outright telling someone “share your salary or you’re a bad person on the side of The Man!”
I hope the amount of people sharing their salary increases and talking about it becomes normalized.
Man, I’d swim to Europe if some company wanted to swoop me up and train me for something that valuable lol. Here in the States I have to not only pay for the training out the nose, but also find the time to do that while still working my regular job lol