[HN Gopher] What's New in SQLite 3.35
___________________________________________________________________
 
What's New in SQLite 3.35
 
Author : nalgeon
Score  : 257 points
Date   : 2021-03-12 20:19 UTC (2 hours ago)
 
web link (nalgeon.github.io)
w3m dump (nalgeon.github.io)
 
| deft wrote:
| This release looks amazing. I use sqlite in almost every one of
| my side projects because it works everywhere and I can just open
| a file to inspect my db state, which is really useful when
| mashing and trying to decide on table formats, etc. The column
| delete and RETURNING additions are godsends; someone else said
| it already, but this makes replacing postgres a lot easier.
 
| 4ec0755f5522 wrote:
| Oh that "returning" is nice; in current SQLite I'm doing a second
| query to get that:                 SELECT last_insert_rowid()
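| 
| With 3.35 that should collapse into a single statement, something
| like this (rough sketch, `users` is a made-up table):
| 
|     INSERT INTO users (name) VALUES ('alice') RETURNING id;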
 
  | pimeys wrote:
  | Now only MySQL is missing `RETURNING` among the relational
  | databases that still matter. Postgres has had it for ages, SQL
  | Server has it with OUTPUT (with the caveat that it breaks when
  | using triggers), and SQLite finally added it in this release.
  | 
  | Using `RETURNING` is so much nicer than spending four round-
  | trips to `BEGIN`, `INSERT`, `SELECT` and `COMMIT` otherwise...
 
  | edoceo wrote:
  | Yea, this one and column removal are my two favourites. And it
  | makes it easier to switch between this and PG.
 
  | masklinn wrote:
  | Especially since last_insert_rowid() can never be free of race
  | conditions, and is basically useless whenever ON CONFLICT is
  | involved.
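  | 
  | With RETURNING the upsert case is covered too, e.g. (rough
  | sketch, assuming a made-up `users` table with a unique email):
  | 
  |     INSERT INTO users (email, visits) VALUES ('a@example.com', 1)
  |     ON CONFLICT (email) DO UPDATE SET visits = visits + 1
  |     RETURNING id, visits;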
 
    | scottlamb wrote:
    | > Especially since last_insert_rowid() can never be free of
    | race conditions
    | 
    | I think that's overstating it. It's per-connection, not per-
    | database. If you don't share connections between threads,
    | it's fine. If you keep the connection locked / checked out
    | for the span of the two statements, it's fine. You're probably
    | doing the latter anyway if you use transactions.
    | 
    | Returning looks nice, though! Particularly that it can return
    | "one result row for each database row that is deleted,
    | inserted, or updated" rather than just one.
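    | 
    | For example, one statement can now report every row it touched
    | (rough sketch, `sessions` is a made-up table):
    | 
    |     DELETE FROM sessions
    |     WHERE expires_at < datetime('now')
    |     RETURNING id;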
 
      | masklinn wrote:
      | > Returning looks nice, though! Particularly that it can
      | return "one result row for each database row that is
      | deleted, inserted, or updated" rather than just one.
      | 
      | That... seems normal? Returning just one row would make no
      | sense, how would even a trivial `RETURNING id` work
      | otherwise?
 
        | scottlamb wrote:
        | Sure, but I'm comparing to last_insert_rowid(), which
        | can't do that.
        | 
        | Also, last_insert_rowid() (by definition) doesn't work
        | with "without rowid" tables, whereas RETURNING does.
 
        | masklinn wrote:
        | Ah I see, I was a bit confused because I'm used to
        | RETURNING from pg and I've been waiting for sqlite to add
        | it for a very long time. Especially since ON CONFLICT was
        | added as it made the issue even worse.
 
  | bob1029 wrote:
  | This sounds incredible.
  | 
  | I am going to have to do some testing. Right now, we lock on a
  | single SQLiteConnection instance because of the fact that
  | insert & id retrieval are 2 separate invocations. If we can get
  | away with just a single ExecuteQueryAsync(MyUpdateSql,
  | MyObject) call and it's thread safe, then we could drop a ton
  | of locking abstractions at the application level.
 
    | masklinn wrote:
    | > If we can get away with just a single
    | ExecuteQueryAsync(MyUpdateSql, MyObject) call and it's
    | thread safe, then we could drop a ton of locking abstractions
    | at the application level.
    | 
    | Not only would you have to ensure you're always opening the
    | db with SQLITE_OPEN_FULLMUTEX, but
    | https://sqlite.org/c3ref/errcode.html and
    | https://sqlite.org/c3ref/changes.html are still not coherent
    | in multithreaded contexts. Calling them won't corrupt
    | anything, but they'll return garbage. The latter probably
    | doesn't matter much, but the former seems quite relevant if
    | you want to handle errors beyond "this failed".
 
      | bob1029 wrote:
      | Correct - We open our databases in serialized mode and
      | access a single connection from multiple threads.
      | 
      | We actually are not concerned with handling detailed error
      | information. Our usage of these databases is very well
      | bounded. All of the SQL that will ever execute against them
      | is contained in 1 series of constant declarations, and the
      | mappers ensure request data is well-formed before
      | attempting inserts & updates. SQLite is really just there
      | to provide identity and persistence for our business
      | objects.
      | 
      | We are still able to use these fields effectively in local
      | development and unit testing, since in these contexts there
      | is only ever 1 thread hitting the database at a time.
 
  | fastball wrote:
  | I'm actually surprised it took SQLite this long to implement
  | RETURNING, such a useful SQL language feature.
 
| Felk wrote:
| > SQLite is called 'lite' for a reason. If you need functions,
| add them yourself.
| 
| Most people I know, myself included, pronounce it that way. But
| Richard Hipp, the creator of SQLite, actually pronounces it SQL-
| ite, like a mineral.
 
  | airstrike wrote:
  | > actually pronounces it SQL-ite, like a mineral.
  | 
  | Surprised you didn't say he pronounces it SQL-ite like an
  | animal
 
    | samatman wrote:
    | "Esqueuellite, like a mineral" is D. Richard Hipp's phrasing.
 
      | Felk wrote:
      | Yep, it's a direct quote I remembered from the Changelog
      | Podcast #201 "Why SQLite succeeded as a database" starting
      | at timestamp 15:58
      | 
      | https://changelog.com/podcast/201
 
  | [deleted]
 
  | u678u wrote:
  | It seems there is no universal agreement, so you can pronounce
  | it whichever way you want.
  | https://english.stackexchange.com/questions/431329/what-is-t...
 
| moonchild wrote:
| > To delete a column, SQLite has to completely overwrite the
| table - so the operation is not fast. But it's still nice.
| 
| Yet another reason why column stores are superior to row stores.
 
  | wayneftw wrote:
  | This is not a problem for "row stores" though, it's specific to
  | SQLite.
  | 
  | As a matter of fact, any classical RDBMS can effectively store
  | columns off-page, and most of them do just that in the case
  | of large TEXT/JSON/BLOB columns.
  | 
  | Column stores like Redshift, Snowflake, etc. are optimized for
  | a much narrower set of use cases. The more widely used
  | databases could do the same optimizations internally but that
  | would make them much less useful.
 
| rphln wrote:
| This is pretty neat. Now that they added `exp`, I'm tempted to
| implement an autojumper using [frecency] in SQL. I tried it once
| before, but IIRC gave up specifically because `exp` was missing.
| 
| [frecency]:
| https://wiki.mozilla.org/User:Jesse/NewFrecency?title=User:J...
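| 
| Roughly what I have in mind (untested sketch; the `visits` table
| and the decay constant are made up, and exp() needs a build with
| the new math functions):
| 
|     SELECT
|       path,
|       sum(exp(-0.1 * (julianday('now') - julianday(visited_at))))
|         AS score
|     FROM visits
|     GROUP BY path
|     ORDER BY score DESC
|     LIMIT 1;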
 
  | marvel_boy wrote:
  | What is an autojumper?
 
    | rphln wrote:
    | It's a shell command that allows you to `cd` to places
    | without having to type the whole path. Some implementations
    | are https://github.com/wting/autojump and
    | https://github.com/rupa/z.
    | 
    | A very simplified example: if you call `j pro`, the command
    | should take you to the folder that best matches the name
    | based on some criteria (one of them, in my case, would be
    | frecency). For me, that would be my projects folder on
    | another drive.
    | 
    | The nifty thing about them is that you don't have to make
    | those aliases manually -- the tool learns the directories
    | you've visited over time.
    | 
    | Of course, the tool could take you to the wrong path, but
    | with a well made set of matching/prioritization rules, I've
    | found it more useful than not.
 
| throw_m239339 wrote:
| > Column removal
| 
| Wow, finally. I love SQLite. It saved my a* in so many projects
| where I had to implement an ETL: I just spawned an in-memory
| SQLite database instead of writing some painful custom code in
| Language X or Z over and over again. Also used it to generate
| static search results in a flat file blogging platform since it
| does have some full text search capabilities. And let's not even
| talk about custom functions, including custom aggregation
| functions, in any client language, directly in your application
| code...
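| 
| For the search part, FTS5 looks roughly like this (sketch; fts5
| has to be compiled in, which it usually is):
| 
|     CREATE VIRTUAL TABLE posts_fts USING fts5(title, body);
|     INSERT INTO posts_fts (title, body)
|     VALUES ('Hello', 'A post about SQLite');
|     SELECT title FROM posts_fts WHERE posts_fts MATCH 'sqlite';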
 
  | u678u wrote:
  | I'm curious about this as I always prefer files and collections
  | of structures or classes to having to deal with databases and
  | ORMs. What else can you do aside from joining and aggregation?
 
    | oblio wrote:
    | You don't need to use an ORM, just use a query builder.
 
  | dmarlow wrote:
  | > Probably the second most popular source of user suffering.
  | 
  | It was #1 for me! Finally indeed.
 
    | asveikau wrote:
    | I would have guessed desire to be the #1 source of user
    | suffering.
 
  | iagovar wrote:
  | > I had to implement an ETL
  | 
  | Buddy, google DuckDB, it's like OLAP SQLite. You'll thank me
  | later.
 
    | bondant wrote:
    | Does someone here use DuckDB in production? Is it as stable
    | as SQLite?
    | 
    | The hosting company I have to work with has a very old
    | version of SQLite installed on the server and they don't want
    | to update it. So I was looking at whether I could replace it
    | with DuckDB since it seems to be easy to install with pip.
 
      | banana_giraffe wrote:
      | Answering the question you didn't ask:
      | 
      | pysqlite3-binary is a Linux-only package on PyPI that
      | includes a recent version of SQLite.
      | 
      | apsw is a cross-platform package that also brings in a
      | modern version, and additionally exposes everything SQLite
      | can do (useful for me: you can write VFS drivers in Python
      | for it). The version of apsw on PyPI is hopelessly out of
      | date; the homepage has more details on how to install the
      | latest version.
 
        | bondant wrote:
        | Thanks for the recommendations, I will have a look at
        | pysqlite3-binary and apsw.
 
      | exyi wrote:
      | DuckDB is designed for query processing, not updating data,
      | so I guess you would not want to use it for transaction
      | processing workloads just because it's easier to install.
      | If you are doing mostly complex queries, then DuckDB is
      | great, but I think there is going to be little usage
      | directly in production; I'd expect it to be used more for
      | data processing.
 
        | snicker7 wrote:
        | And how great is SQLite for transactional workloads?
        | Every transaction locks up the entire db. There is no
        | page or even table level locking. DuckDB can't be worse
        | than that.
 
        | nalgeon wrote:
        | SQLite in WAL mode allows a single concurrent writer and
        | unlimited readers, which is completely fine for a wide
        | range of production use cases.
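        | 
        | Enabling it is a one-line pragma per database:
        | 
        |     PRAGMA journal_mode = WAL;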
 
        | simonw wrote:
        | In practice, I've found that most SQLite write operations
        | take low-milliseconds to complete - so even under a very
        | heavy write load (assuming WAL mode) you're unlikely to
        | see any contention between writes.
        | 
        | For my own stuff I maintain an in-memory Python queue of
        | write operations and apply them in order against a single
        | write connection - while running numerous other read-only
        | connections to serve SELECTs.
 
        | infogulch wrote:
        | SQLite is not at the top of the charts, but there's a
        | looong ways to drop below it among the wide variety of
        | desirable properties that it offers.
 
        | bondant wrote:
        | Yes, I'm doing very few updates or inserts, but I run a
        | lot of queries, so I guess DuckDB should be more than
        | okay.
 
  | rgacote wrote:
  | Possibly the first step towards the ability to rename a column?
 
    | benasher44 wrote:
    | Rename is already here! Came in 3.25.0
    | (https://www.sqlite.org/changes.html)
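    | 
    | For reference (rename has been there since 3.25, drop is new
    | in 3.35; `t` and the columns are made up):
    | 
    |     ALTER TABLE t RENAME COLUMN a TO b;
    |     ALTER TABLE t DROP COLUMN c;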
 
    | [deleted]
 
  | nalgeon wrote:
  | SQLite is really great at crunching data! I definitely prefer
  | it over pandas in most cases, as SQL naturally fits joins,
  | aggregates, etc. Also, SQLite works natively with JSON, which
  | is a huge time saver.
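  | 
  | For example (JSON1 functions, which most builds include):
  | 
  |     SELECT json_extract(value, '$.name')
  |     FROM json_each('[{"name": "a"}, {"name": "b"}]');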
 
    | abhgh wrote:
    | I prefer it over pandas for joins etc too. My workflow is (1)
    | do the simple stuff in Python using pandas (2) for some of
    | the complex stuff, I just start creating sqlite tables. If
    | you have datasette installed, you can also view the tables
    | (choosing to write intermediate ones for greater
    | debuggability) pretty easily in your browser.
 
      | nalgeon wrote:
      | datasette is a wonderful piece of software! Simon Willison
        | has created great tooling around SQLite. Can't imagine
      | how much time and energy he has invested in these projects.
 
        | mmcdermott wrote:
        | Honest question because I haven't messed with Datasette
        | much beyond skimming the home page - how does it improve
        | on a general SQL client like DataGrip or Squirrel or
        | DBeaver?
 
        | simonw wrote:
        | Obviously I'm biased, so I'd love to hear answers to this
        | from other people (plus I've not really used any of those
        | alternatives much).
        | 
        | Datasette is very "webby". Queries you execute end up in
        | your URL bar as ?sql= parameters, which means you can
        | navigate to them in your history, bookmark them, share
        | links with other people (if your Datasette is shared) and
        | open them in new tabs.
        | 
        | It also does web-style tricks like turning foreign key
        | references into hyperlinks through to the associated
        | records.
        | 
        | Datasette's table browsing feature has faceting, which is
        | enormously powerful. I don't know if those alternatives
        | have this feature or not, but I use this constantly. Demo
        | here (the owner, country_long and primary_fuel columns):
        | https://global-power-plants.datasettes.com/global-power-
        | plan...
        | 
        | Datasette's plugin system is pretty unique too. You can
        | install plugins like
        | https://datasette.io/plugins/datasette-cluster-map and
        | https://datasette.io/plugins/datasette-vega to add
        | visualizations, which again are bookmarkable and hence
        | easy to share with other people.
        | 
        | All of that said, I don't really see Datasette as
        | competing with existing SQL clients. It's intended more
        | as a tool for exploratory data analysis - I've put very
        | little work into running UPDATE/INSERT statements for
        | example, it's much more about turning a set of relational
        | data into something people can interactively explore.
 
    | tinus_hn wrote:
    | I get the feeling we've been trained to expect less by poor,
    | incredibly slow legacy products. But really, for many use
    | cases all the open source relational databases give instant
    | results.
 
| ketralnis wrote:
| You can also read the release notes[0], which are pretty readable
| as always and don't take the extra time to denigrate the work of
| this excellent and freely provided software.
| 
| [0]: https://sqlite.org/releaselog/3_35_0.html
 
| samatman wrote:
| `RETURNING` will substantially clean up my code, and I already
| have one migration which could have just been a `DROP COLUMN`, so
| this is great news.
| 
| On the subject of "it's called 'lite' for a reason", my wishlist
| does include library functions for working with RFC3339
| timestamps. SQLite already ships with a fairly large suite of
| JSON tools, which are optional to compile into the library, so
| there's precedent.
| 
| Datetimes are one of those things that are incredibly annoying to
| get right, and really belong inside the database. RFC3339
| timestamps are already well designed, since if you stick to UTC
| (and if you don't store timezone data separately you deserve
| those problems), lexical order is temporal order. But a query
| which would be rendered in English as "return all accounts where
| last payment is ninety days prior to `now`" isn't really possible
| with string comparisons.
| 
| Also, with the JSON library, you can use a check constraint to
| fail if a string isn't valid JSON, another affordance I would
| love to have for datetimes.
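| 
| The JSON version of that constraint looks roughly like this
| (made-up table):
| 
|     CREATE TABLE docs (
|       body TEXT CHECK (json_valid(body))
|     );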
| 
| Grateful for what we just got, though! Just daydreaming...
 
  | nalgeon wrote:
  | SQLite has ISO-8601 compatible date functions, isn't that
  | enough?
  | 
  |     sqlite> select datetime('now', '-90 days');
  |     2020-12-12 21:44:22
  | 
  | https://sqlite.org/lang_datefunc.html
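  | 
  | And since the output is a comparable ISO-8601 string, the
  | "ninety days" query can be a plain string comparison (sketch,
  | `accounts` is a made-up table storing ISO-8601 text):
  | 
  |     SELECT * FROM accounts
  |     WHERE last_payment < datetime('now', '-90 days');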
 
| siscia wrote:
| I am running zeeSQL.com, a Redis module that embeds SQLite into
| Redis to provide SQL and secondary indexes (search by value) on
| top of Redis.
| 
| This release is the first one I am really excited about.
| 
| As soon as I can allocate a few minutes I will update the SQLite
| code of the software.
| 
| The RETURNING makes a lot of queries and use cases an order of
| magnitude simpler.
 
| polyrand wrote:
| The `RETURNING` is so awesome! I'm implementing a set of data
| structures on top of SQLite, one of them is a queue[0], and I had
| to do a transaction to lock a message and then return it, but
| this makes it easier.
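| 
| Something like this single statement should now do the
| lock-and-return (rough sketch; the table and columns are made up):
| 
|     UPDATE queue
|     SET status = 'locked'
|     WHERE id = (SELECT id FROM queue
|                 WHERE status = 'ready'
|                 ORDER BY created_at
|                 LIMIT 1)
|     RETURNING id, message;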
| 
| There's one little issue I keep finding with SQLite, and it's
| that most virtual servers / VM images ship with version 3.22.0,
| and upgrading often means building from source.
| 
| In any case, SQLite is absolutely wonderful. My favorite way of
| building products is having a folder for all the DBs that I mount
| to docker-compose. This release makes it even better.
| 
| [0] https://github.com/litements/litequeue
 
| einnjo wrote:
| I recently used SQLite in my side project [1]. Here's what I
| learned from using it:
| 
| * Shaving the overhead of network calls for queries sped up my
| site significantly.
| 
| * Most CI/CD providers include SQLite in their base linux images
| and setup in local envs is easy as well. Running tests against
| the actual database is simple.
| 
| * Replication is not available out of the box. To share a
| database with multiple instances of your app you will have to use
| a shared storage volume or some of the available solutions at
| [2][3][4], but they each come with their caveats.
| 
| [1] https://www.tendielist.com
| 
| [2] http://litereplica.io
| 
| [3] http://litesync.io
| 
| [4] https://bedrockdb.com
 
  | toddgruben wrote:
  | You can add this one to the list https://litestream.io/
 
  | tyingq wrote:
  | More than replication, but dqlite might be worth a look as
  | well. It is supposedly stable now that it was rewritten in C.
  | 
  | https://dqlite.io/
 
| dmarlow wrote:
| > To delete a column, SQLite has to completely overwrite the
| table - so the operation is not fast. But it's still nice.
| 
| Can someone with more knowledge/experience ELI5, please? Is this
| essentially how it's done in other db engines? TIA
 
  | awestroke wrote:
  | In postgresql:
  | 
  |     The DROP COLUMN form does not physically remove the column,
  |     but simply makes it invisible to SQL operations. Subsequent
  |     insert and update operations in the table will store a null
  |     value for the column. Thus, dropping a column is quick but
  |     it will not immediately reduce the on-disk size of your
  |     table, as the space occupied by the dropped column is not
  |     reclaimed. The space will be reclaimed over time as
  |     existing rows are updated.
 
    | airstrike wrote:
    | > The space will be reclaimed over time as existing rows are
    | updated.
    | 
    | Or as you VACUUM, correct? I think it lets you specify a
    | column name too.
 
      | masklinn wrote:
      | VACUUM just marks tuples as free space so they can be
      | reused. This is part of
      | 
      | > The space will be reclaimed over time as existing rows
      | are updated.
      | 
      | because of MVCC, updating a row really inserts a new row
      | and the old one eventually becomes free space (once a
      | vacuum comes around to marking it).
      | 
      | VACUUM FULL, however, will rewrite the entire table.
 
  | masklinn wrote:
  | postgresql does essentially nothing on a drop column, in part
  | because it doesn't use fixed-size tuples:
  | 
  | > The DROP COLUMN form does not physically remove the column,
  | but simply makes it invisible to SQL operations. Subsequent
  | insert and update operations in the table will store a null
  | value for the column. Thus, dropping a column is quick but it
  | will not immediately reduce the on-disk size of your table, as
  | the space occupied by the dropped column is not reclaimed. The
  | space will be reclaimed over time as existing rows are updated.
  | 
  | but if you VACUUM FULL (or CLUSTER) it will immediately rewrite
  | the entire table.
  | 
  | Also note that storing a null means forcing a null bitmap for
  | every row (even if it's not otherwise used).
 
  | derefr wrote:
  | In many engines, row-tuples are materialized from rows by
  | having the query planner turn the table's metadata into a
  | mapping function. With this approach, you get a bunch of things
  | "for free"--the ability to reorder columns, rename columns, add
  | new nullable all-NULL columns or default-constant all-default-
  | valued columns, all without doing any writing. Rows instead get
  | rewritten when the DB builds a new version of them for some
  | other reason (e.g. during UPDATE) or during some DB-specific
  | maintenance (e.g. during VACUUM, for Postgres.)
  | 
  | I don't believe SQLite works this way. It gives you literally
  | what's in the encoded row, decoded. I believe this allows it to
  | be either zero-copy or one-copy (not sure which), but it has
  | the trade off of disallowing these fancy kinds of read-time
  | mapping.
  | 
  | IMHO it's a trade off that makes sense, on both sides. Client-
  | server DBMS inherently need to eventually serialize the data
  | and send it over the wire, so fewer copies doesn't get you
  | much, while remapping columns at read time might get you a lot.
  | SQLite can hand pointers directly to the app it's embedded in,
  | so "direct" row reads are a great advantage, while--due to the
  | small size of most SQLite DBs--the need for eager table
  | rewrites on ALTER TABLE isn't even very expensive.
 
| deknos wrote:
| Do joins now work in every way (full, inner, outer)?
 
  | ketralnis wrote:
  | They always have, haven't they?
  | https://www.sqlite.org/lang_select.html
 
    | deknos wrote:
    | not according to https://www.sqlite.org/omitted.html
 
| eigenvalue wrote:
| I love SQLite. Some of the best, most reliable code out there
| made by a small handful of people who really care about
| excellence. I'm curious if anyone here has experience with DQlite
| (https://dqlite.io/ ) from Canonical. It's basically a
| distributed version of SQLite that combines it with the RAFT
| consensus protocol. It sounds like a great idea and I'm
| considering it for a project I'm working on now, but I'd love to
| hear what people think of it in practice.
 
| IgorPartola wrote:
| I don't use SQLite as my main data store for any project that
| runs on a server because a dedicated DB server has advantages.
| But that doesn't mean that I don't use it for production use.
| Here is an example from a few years ago:
| 
| I needed to pull in a large dataset from CSV (several dozen
| gigabytes), do a bunch of transforms on it, extract a small
| subset of data that I actually was interested in, and load that
| subset into the main DB. I needed to do this in a way where it
| wouldn't bog down the main DB server (which stored less data than
| what was in the CSVs), and also in a way where I could
| potentially load diffs of the CSV files as updates. So the
| solution was that a dedicated job would run on a server where the
| SQLite database was used to load the CSVs and to manipulate the
| data with SQL because that was way more convenient than doing it
| with ad hoc objects in Python. Once done, extract rows from
| SQLite and load into Postgres. The local DB would stick around to
| use the diffs for incremental updates, but if it was lost it was
| easy to recreate it from the last full dataset. Not highly
| available, but highly understandable, cheap, fast, and dead
| simple.
 
  | chrisweekly wrote:
  | Sounds like a job for lnav (https://lnav.org), an awesome
  | little CLI "mini-ETL" powertool with SQLite embedded.
 
  | pablobaz wrote:
  | https://duckdb.org/ is good for these type of tasks
 
| krat0sprakhar wrote:
| What's the best way to run a service that uses SQLite in a
| serverless / container environment (e.g. Cloud Run)?
| 
| I'd love to use this for my personal projects and I'm not sure
| how to set this up in a container given their ephemeral nature.
 
  | segf4ult wrote:
  | If your app is in a Docker container, you need to have some way
  | to mount a persistent volume for the database. AWS Fargate and
  | fly.io both offer storage volumes.
 
  | elamje wrote:
  | You can check out Dokku, which is the open source, single
  | server Heroku equivalent. You can mount a storage volume to the
  | container and access it from your app running in Docker. Dokku
  | makes it stupid simple, so it might be worth reading the source
  | code to see how they do that.
 
    | hectormalot wrote:
    | I use Dokku, but it also has a Postgres plugin which is
    | basically just a few commands. That also gives you the
    | niceties of dokku pg:backup etc. If you're on Dokku anyway
    | then that's what I would recommend.
    | 
    | For me, SQLite remains the perfect file format for command
    | line tools.
 
| gigatexal wrote:
| with the returning clause it should be trivial to do cdc now, no?
 
  | nalgeon wrote:
  | Isn't it done with triggers? SQLite has supported them for a
  | long time.
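  | 
  | e.g. a bare-bones change log (sketch, both tables are made up):
  | 
  |     CREATE TRIGGER users_audit AFTER UPDATE ON users
  |     BEGIN
  |       INSERT INTO changes (row_id, old_name, new_name, at)
  |       VALUES (old.id, old.name, new.name, datetime('now'));
  |     END;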
 
  | gunnarmorling wrote:
  | Interesting thought; with the client/server database model,
  | RETURNING isn't relevant for CDC, which rather extracts changes
  | from the transaction log, not scoped to any particular client
  | session. But in the case of SQLite and its embedded way of
  | running, one could indeed envision using it to implement
  | some basic CDC functionality at the application level. It'd
  | still lack metadata though, like transaction ids, log file
  | offsets, etc.
 
  | paulryanrogers wrote:
  | CDC?
 
    | gigatexal wrote:
    | yup, change data capture.
 
    | dorfsmay wrote:
    | https://en.wikipedia.org/wiki/Change_data_capture
 
    | [deleted]
 
| swlkr wrote:
| This release changes everything!
| 
| My biggest complaints were no returning clause, and no drop
| column.
| 
| That's it, now I can run sqlite in production without hesitation!
 
| esturk wrote:
| Is there a site equivalent to "caniuse.com" for all the different
| database features by product and version? If not, someone should
| build one. Often some DB would say they added feature X but what
| does that mean? Did they implement this in advance of other DBs
| or behind others?
 
  | foobar33333 wrote:
  | There is no standard spec so the same function name doing the
  | same kind of thing will work in different ways or support
  | different features.
 
  | scrollaway wrote:
  | That's a really wonderful idea.
 
  | WrtCdEvrydy wrote:
  | The issue is there's no database feature spec, so you end up
  | with each DB having its own version of something.
 
    | dragonwriter wrote:
    | > The issue is there's no database feature spec
    | 
    | ISO/IEC 9075 disagrees.
 
      | WrtCdEvrydy wrote:
      | Yeah, that's the query language used in relational
      | databases, but find me the spec for "database encryption"
      | or "database clustering".
 
  | nalgeon wrote:
  | Markus Winand is doing something like that at
  | https://modern-sql.com/. But it's not nearly as extensive as
  | caniuse.com.
 
| The_rationalist wrote:
| It's such a shame that webSQL died, I hope they will reconsider
| the decision in the next few years.
 
  | slaymaker1907 wrote:
  | I think an independent implementation would help. Personally, I
  | think SQLite is a unique project which is of such high quality
  | that independence is unimportant, but others do not see it that
  | way.
 
    | hajile wrote:
    | We now have three steaming piles of indexedDB garbage that
    | are still (last I checked) implemented on top of sqlite
    | anyway. I don't see how anyone could think the world became a
    | better place after the change.
 
  | hajile wrote:
  | Even if they don't, I at least hope they'll provide an
  | alternative. indexedDB is hardly usable let alone a replacement
  | for anything.
 
| p4bl0 wrote:
| I was already a complete fan of SQLite, but some of these new
| features are the cherries on the cake =).
 
  | nalgeon wrote:
  | Yeah, like being able to remove a column! And math functions,
  | can't believe they finally implemented those!
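  | 
  | e.g. (they are only available when the library is built with
  | SQLITE_ENABLE_MATH_FUNCTIONS):
  | 
  |     select sqrt(2), ln(10), pow(2, 10);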
 
___________________________________________________________________
(page generated 2021-03-12 23:00 UTC)