Are Long Running Transactions Bad?
This transaction has been open since July 2, 2001.
We call it “Edward.”
– http://thebuild.com/presentations/10-ways-to-kill-performance.pdf page 33
Some time ago I was asked about long running transactions: are they really so bad, and how long should a transaction run? I will try to answer that question now, as it comes up all the time. The answer, as usual with databases, is not trivial.
This will be a very simple introduction to database transactions. I just assume that the transactions operate in the "normal" way, meaning that a transaction sees only the consistent state of the database from the moment it started. A further description of all transaction types can be found in the excellent PostgreSQL documentation.
Database transactions implement the all-or-nothing principle. You can perform many different operations in a transaction. If you end the transaction using the COMMIT command, then all the changes are made. If you use the ROLLBACK command instead, then none of the changes is made. It is impossible to load only a part of the changes into the database.
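As a minimal sketch of the all-or-nothing behaviour (assuming a hypothetical table named accounts):

```sql
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);
INSERT INTO accounts (id, balance) VALUES (2, 200);
COMMIT;    -- both rows become visible to everyone at once

BEGIN;
DELETE FROM accounts;
ROLLBACK;  -- the delete never happened; both rows are still there
```

There is no state in which only one of the two inserts, or only part of the delete, is visible.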
For me the nicest PostgreSQL transaction feature is full transactional DDL. You can create an index in a transaction, run many queries, and see that the index is used. You can then roll back the transaction, and the index will just magically disappear. What's more, no other transaction will ever see that index.
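The index experiment described above could look like this (again using the hypothetical accounts table):

```sql
BEGIN;
CREATE INDEX accounts_balance_idx ON accounts (balance);

-- within this transaction, the planner can use the new index
EXPLAIN SELECT * FROM accounts WHERE balance = 100;

ROLLBACK;  -- the index disappears; no other transaction ever saw it
```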
The same holds for all other PostgreSQL operations. You can create a bunch of tables in a transaction; if you made an error somewhere, the transaction will be rolled back. You can then just fix the error and run it again, without worrying about tables that might have been created earlier.
Why Are Long Transactions Bad?
Too Much Old Garbage Here
Normally a transaction sees a snapshot of the data from the beginning of the transaction. If it can see the data, the data must still exist in the database. So if there is a row with some data, you start two transactions, and one of them changes the row, the second transaction should still see the first version. This creates the need to store multiple versions of the same row in the database.
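The two-transaction scenario can be sketched with two sessions; I use REPEATABLE READ explicitly so the snapshot is pinned to the start of the transaction, and the accounts table is again hypothetical:

```sql
-- Session 1:
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;  -- sees the original value

-- Session 2:
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 1;
COMMIT;                                     -- a second row version now exists

-- Session 1:
SELECT balance FROM accounts WHERE id = 1;  -- still sees the original value
COMMIT;
```

As long as session 1 is open, the old row version cannot be thrown away.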
In PostgreSQL there is the autovacuum process (you can also easily run VACUUM by hand), which clears out unused row versions. The freed space can later be reused for new data, so the database can stay the same size for a long time.
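To run the cleanup by hand and check how many dead row versions a table has accumulated, something like this works (table name is again an assumption):

```sql
-- how many dead row versions are waiting to be cleaned up?
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'accounts';

-- reclaim the space occupied by dead versions for reuse
VACUUM (VERBOSE) accounts;
```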
Imagine that you have a very long running transaction, let's say two weeks: it started two weeks ago. What does that mean for the database? It means that this transaction must be able to see the database state from two weeks ago. Autovacuum cannot clear anything from the last two weeks, as it could still be needed. The database grows and grows, the indexes get bigger and bigger, and the overall performance silently drops.
Transaction Number Wraparound
Each transaction in the database gets its own unique number. The transaction number generator is cyclic, and one day you could get this warning:
WARNING: some databases have not been vacuumed in over 2 billion transactions
DETAIL: You may have already suffered transaction-wraparound data loss.
Warnings are usually totally ignored… until the day comes: the day when the database suddenly shuts down with an error saying that transaction wraparound is about to occur and could destroy some data, so to prevent data loss the database just shuts down. After a restart everything will be fine; it just works.
The only thing you've probably not noticed is that the long running (or rather, most probably, long hanging) transaction is not there any more, so vacuum can finally clean up old row versions. However, that won't decrease the database size on disk.
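You can keep an eye on how close each database is to wraparound by checking the age of its oldest transaction ID:

```sql
-- the closer xid_age gets to 2 billion, the closer you are to trouble
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```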
Locks
A transaction can hold a lock. When a transaction updates a row, it locks it. Another transaction that wants to update or delete the same row will hang until the lock is released. Such automatic locks are released only at the end of the transaction (it doesn't matter whether the transaction commits or rolls back). A long transaction that locks something can hurt overall performance, as all other transactions may be waiting for this lock to be released.
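A minimal two-session sketch of such a row lock (hypothetical accounts table again):

```sql
-- Session 1:
BEGIN;
UPDATE accounts SET balance = 0 WHERE id = 1;  -- takes a row-level lock
-- ... transaction left open ...

-- Session 2:
UPDATE accounts SET balance = 1 WHERE id = 1;  -- hangs here, waiting

-- Session 1:
COMMIT;  -- the lock is released, session 2's update now proceeds
```

If session 1 stays open for a week, session 2 waits for a week.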
How Short a Transaction Should Be?
This cannot be answered simply. It depends. This is my favourite answer to most questions about databases: it just depends. Sometimes there is a really huge operation where the transaction lasts a couple of days. That is OK, but only if it is really needed.
If you have normal operations on the database, where transactions usually last a couple of seconds or minutes, and suddenly one such normal transaction hangs for a week, you can be sure there was an error in the application. The transaction should be removed from the database, and most probably you will have a really big performance problem for the next couple of weeks.
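To find such hanging transactions, and to remove them, you can query pg_stat_activity; the pid value below is of course just a placeholder:

```sql
-- list open transactions, longest-running first
SELECT pid, now() - xact_start AS duration, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY duration DESC;

-- cancel the current query of the offending backend...
SELECT pg_cancel_backend(12345);

-- ...or, if that is not enough, terminate the whole connection
SELECT pg_terminate_backend(12345);
```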