This is cool. I've been thinking about this for a while. The best way to write high quality web applications is to use compiled languages and minimize the complexity of the infrastructure by using e.g. SQLite instead of PostgreSQL.
I hope this trend picks up more steam and becomes the sane default that everyone just assumes, instead of the current mess where everyone treats it as "normal" to live in a world of countless abstractions, frameworks, microservices, etc.
I don't see how this would reduce complexity. If you don't need to scale, Postgres still enforces correctness better and uses almost no resources. When you do need to scale, you WILL need to switch to Postgres anyway. Postgres is also dead simple to install and use for the "simple personal blog" use case. The enormous technical debt you're taking on, just to save a one-time installation of 2-3 commands and a slightly quicker download, is not worth it.
SQLite is fine if you don't have much traffic (for example personal blogs), but you can't replace a DB like PostgreSQL with SQLite if you have a lot of concurrent traffic.
Supposing that your server-side tech uses multiple workers to render pages, I assume you have to serialize access to the SQLite file. What is the best way to do this?
SQLite can handle concurrent reads, but needs to lock the entire DB when writing. Multiple workers shouldn't be a problem per se, but if you need multiple workers to support the traffic, then maybe a client-server db like postgres is a better choice.
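Worth noting: the "lock the entire DB when writing" behavior applies to the default rollback-journal mode. Enabling WAL mode lets readers keep going while a single writer works, which helps a lot with multiple workers. A minimal sketch in Python's stdlib sqlite3 (the file name "app.db" and the table are made up for illustration):

```python
import sqlite3

# Hypothetical database file for the example.
conn = sqlite3.connect("app.db")

# WAL mode: readers are no longer blocked by the (single) writer.
# The setting is stored in the database file, so it persists
# across connections once set.
conn.execute("PRAGMA journal_mode=WAL")

# Make writers wait up to 5 seconds for the write lock instead of
# failing immediately with "database is locked".
conn.execute("PRAGMA busy_timeout=5000")

conn.execute(
    "CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)"
)
conn.execute("INSERT INTO posts (title) VALUES (?)", ("hello",))
conn.commit()
```

There is still only one writer at a time even in WAL mode, so write-heavy workloads are exactly where the client-server suggestion above starts to make sense.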