Business/application logic can be 80-90% of an app’s code, and all of it can be reused across platforms. The actual UI rendering is just a small part of it.
Some of the UI code does have to differ across platforms, but it's mostly the lower-level components like buttons, text fields, etc. Product UI code built on top of those abstractions can be reused across platforms.
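To make the layering concrete, here's a toy, language-agnostic sketch (in Python purely for illustration; all the names are made up): the business logic is identical everywhere, only the thin rendering layer differs per platform, and the product-level UI flow is written once against that abstraction.

```python
# Shared business logic: identical on every platform.
def cart_total(items):
    return sum(item["price"] * item["qty"] for item in items)

# Low-level, platform-specific rendering: the only part that has to differ.
def render_total_ios(total):      # would wrap a UIKit label in a real app
    print(f"[UILabel] Total: ${total:.2f}")

def render_total_android(total):  # would wrap an Android TextView
    print(f"[TextView] Total: ${total:.2f}")

# Product UI flow built on the abstraction: written once, reused everywhere.
def show_checkout(items, render_total):
    render_total(cart_total(items))

show_checkout([{"price": 9.99, "qty": 2}], render_total_ios)
```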
I don’t understand why desktop JS apps don’t use React Native at least. It’s still JavaScript but doesn’t use a browser, and renders to native UI widgets. Far lighter than Electron.
I moved from Australia to the USA since salaries for developers are so much higher here. I live in Silicon Valley which helps too. If you’re a senior developer (say 5+ years of experience) then a lot of the large companies here pay $200-300k/year salary plus $100-200k/year in company stock plus a bonus that’s 10-20% of salary if you get a good performance review.
Anywhere that COBOL can be replaced with something more modern, that replacement is already underway.
Rewrites are extremely risky though, and some companies don't want to take that chance. That COBOL code probably has 40 years' worth of bug fixes and patches for every possible edge/corner case. A rewrite throws all of that away and starts from scratch.
Do you know of a decent sized company that successfully migrated away from COBOL? I’d be interested in reading a whitepaper about how they did it, if such a thing exists.
There is a disproportionately large number of furries working as network admins though. Whenever you use the internet, there’s a good chance that your data is transiting via a network administered by furries.
I wish people wouldn’t downvote comments like this. The downvote button isn’t an “I disagree with you” button and downvoting people just because you disagree with their opinion is silly.
We wouldn’t have Safari (WebKit) or Chrome (Blink) today if it weren’t for Konqueror and KHTML! WebKit is a fork of KHTML, and Blink is a fork of WebKit.
It can really slow things down if your views start calling other views, since they’re not actually tables.
They can be in some cases! There’s a type of view called an “indexed” or “materialized” view where the view’s data is stored on disk like a regular table. Depending on the database, it’s either kept up to date automatically as the source tables change (SQL Server indexed views) or refreshed on demand or on a schedule (PostgreSQL materialized views). Doesn’t work well for source tables that are updated very frequently, though.
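As a rough sketch of what that looks like in practice (PostgreSQL syntax via psycopg2; the connection string, `orders` table, and column names are all made up), you create the materialized view once and then refresh it whenever the aggregates need to be brought up to date:

```python
import psycopg2

# Hypothetical connection string and schema, for illustration only.
conn = psycopg2.connect("dbname=shop user=shop")
cur = conn.cursor()

# The view's result set is written to disk like a regular table.
cur.execute("""
    CREATE MATERIALIZED VIEW daily_revenue AS
    SELECT order_date, SUM(total) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Postgres doesn't maintain it automatically; refresh on demand or on a schedule.
cur.execute("REFRESH MATERIALIZED VIEW daily_revenue")

conn.commit()
cur.close()
conn.close()
```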
Having said that, if you’re doing a lot of data aggregation (especially if it’s a sproc that runs daily), you’d probably want to set up a separate OLAP database so that large analytical queries don’t slow down transactional queries. With open-source technologies, that usually means Hive plus Presto or Spark for the queries, with Apache Airflow handling the scheduling.
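For the daily-aggregation case, the scheduling side is often just a small Airflow DAG that kicks off the heavy query against the OLAP cluster once a day. A minimal sketch, where the DAG name and the Spark job script path are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_page_stats",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day against the OLAP cluster
    catchup=False,
) as dag:
    # Offload the heavy aggregation to Spark so it never touches the OLTP database.
    aggregate = BashOperator(
        task_id="aggregate_page_stats",
        bash_command="spark-submit /opt/jobs/aggregate_page_stats.py --date {{ ds }}",
    )
```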
Also, if you have data that’s usually aggregated by column, then a column-based database like ClickHouse is usually way faster than a regular row-based database. These store data per-column rather than per-row, so aggregating one column across millions or even billions of rows (e.g. average page load time for all hits ever recorded) is fast.
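The intuition shows up even in plain Python: with row-oriented storage, an aggregate over one column still has to touch every record, while a column store keeps each column as one contiguous array and only reads the bytes it needs (and compresses them far better on disk). A toy in-memory analogy, not ClickHouse itself:

```python
import random
from array import array

N = 1_000_000

# Row-oriented: every "hit" is a record with many fields.
rows = [
    {"url": f"/page/{i % 50}", "status": 200, "load_ms": random.random() * 500}
    for i in range(N)
]
avg_from_rows = sum(r["load_ms"] for r in rows) / N  # walks every record

# Column-oriented: load_ms lives in one contiguous array of doubles,
# so the same aggregate only scans that single column.
load_ms_column = array("d", (r["load_ms"] for r in rows))
avg_from_column = sum(load_ms_column) / N

print(avg_from_rows, avg_from_column)
```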
Like iSCSI, it exposes a disk image file, or a raw partition if you’d like (by using something like /dev/sda3 or /dev/mapper/foo as the file name). Unlike iSCSI, it’s a fairly basic protocol (the API is literally only 9 commands). iSCSI is essentially just regular SCSI over the network.
NFS and SMB have to deal with file locks, multiple readers and writers concurrently accessing the same file, permissions, etc. That can add a little bit of overhead. iSCSI and NBD assume only one client is using the file (two clients can’t safely use the same disk image at the same time; it’d get corrupted), so they’re just reading and writing raw data.
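To give a feel for how small NBD is: once the connection is negotiated, every request is just a fixed 28-byte header (command type, a handle, an offset, and a length), optionally followed by data for writes. A sketch of building a read request, based on my reading of the NBD protocol spec; the magic number and field layout come from the spec, everything else here is made up:

```python
import struct

NBD_REQUEST_MAGIC = 0x25609513
NBD_CMD_READ = 0  # one of the handful of command types (read, write, disconnect, flush, trim, ...)

def nbd_read_request(handle, offset, length):
    # Transmission-phase request header: magic, command flags, type,
    # handle, offset, length -- all big-endian, 28 bytes total.
    return struct.pack(">IHHQQI", NBD_REQUEST_MAGIC, 0, NBD_CMD_READ,
                       handle, offset, length)

# Ask the server for 4 KiB starting at byte 0 of the exported image.
packet = nbd_read_request(handle=1, offset=0, length=4096)
assert len(packet) == 28
```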