
The cost of rolling back transactions (Postgres/MySQL)

Duration: 9:25
 
The cost of a long-running update transaction that eventually fails in Postgres (or any other database, for that matter).

In Postgres, any DML transaction that touches a row creates a new version of that row. If the row is referenced in indexes, those need to be updated with the new tuple ID as well. There are exceptions, such as the heap-only tuple (HOT) optimization, where the indexes don't need to be updated, but that doesn't always apply. If the transaction rolls back, the new row versions it created (millions in my case) are now invalid and should NOT be read by any new transaction.

There are two ways to address this: clean up all the dead rows eagerly on transaction rollback, or do it lazily as a post-process. Postgres takes the lazy approach: a command called VACUUM runs periodically and attempts to remove those dead rows and free up space in the pages.

What's the harm of leaving those dead rows in? It isn't really a correctness issue; transactions know not to read dead rows by checking the state of the transaction that created them. But that check, whether the creating transaction committed or rolled back, is expensive. Also, dead rows share disk pages with live rows, which makes I/O inefficient because the database has to filter the dead rows out. For example, a page may hold 1,000 rows of which only 1 is live and 999 are dead; the database still pays for that I/O but gets a single row out of it. Repeat that and you end up doing far more I/O. More I/O = slower performance.

Other databases take the eager approach and won't even let the database start until the rollback has completed successfully, using undo logs. Which one is right and which one is wrong? Here is the fun part: nothing is right or wrong, these are all decisions that we engineers make. It's all fundamentals, and it's up to you to understand them and pick. Anything can work. You can make anything work if you know what you are dealing with.

If you want to learn the fundamentals of databases and demystify them, check out my Udemy course https://database.husseinnasser.com
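To make this concrete, here is a minimal sketch you can run against a scratch Postgres database (the table name demo and the row counts are my own illustration, not from the episode): a rolled-back bulk update leaves its new row versions behind as dead tuples, which you can observe through the system columns and the statistics views.

-- Hypothetical scratch table, 100,000 rows (names and sizes are assumptions for illustration).
CREATE TABLE demo (id int PRIMARY KEY, val text);
INSERT INTO demo SELECT g, 'x' FROM generate_series(1, 100000) g;

-- Every row version carries system columns: ctid (its physical location in a page)
-- and xmin (the transaction that created this version).
SELECT ctid, xmin, id FROM demo WHERE id = 1;

BEGIN;
UPDATE demo SET val = 'y';   -- writes 100,000 new row versions
ROLLBACK;                    -- those new versions are now dead, but they are still on disk

-- The statistics view reports the dead tuples the rollback left behind
-- (the counters are maintained asynchronously, so the number may take a moment to appear).
SELECT n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE relname = 'demo';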
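And the lazy cleanup side of the same sketch: VACUUM, which autovacuum normally runs for you in the background, scans the table, removes the dead versions, and frees the space inside the pages for reuse.

-- Lazy cleanup of the dead row versions left by the rollback above.
VACUUM VERBOSE demo;

-- The dead tuples should now be gone, and the freed space inside the pages
-- can be reused by future inserts and updates on this table.
SELECT n_live_tup, n_dead_tup, last_vacuum FROM pg_stat_user_tables WHERE relname = 'demo';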

