copy_to_distributed_table in contrib package fails to mark inactive shards #235

Closed
citus-github-bot opened this issue Feb 4, 2016 · 4 comments

Issue by metdos
Wednesday Mar 25, 2015 at 12:02 GMT
Originally opened as citusdata/pg_shard#95


If I shut down (Ctrl+C) copy_to_distributed_table while it is working, it fails to mark the shards on the closed node as inactive. If I wait for copy_to_distributed_table to complete, then it properly marks the shards on the closed node as inactive.

Here are the steps to reproduce the problem:

  1. Start a 2+1 pg_shard cluster.
  2. Create the customer_reviews table as in the pg_shard documentation and create shard placements.
  3. Shut down one of the worker nodes.
  4. Start copy_to_distributed_table:
    /usr/local/pgsql/bin/copy_to_distributed_table -CH -n NULL customer_reviews_1998.csv customer_reviews
  5. Shut it down (Ctrl+C) before it completes.
  6. Check pgs_distribution_metadata.shard_placement and see that none of the shard placements are marked as inactive (see the query after these steps).
  7. Run queries:
    1. Run select count(*) from customer_reviews;
    2. Start the stopped worker node.
    3. Run select count(*) from customer_reviews; again and see that the results differ.
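
To make step 6 concrete, a query along these lines shows the placement states. The schema follows pg_shard's pgs_distribution_metadata; the state values (1 = finalized, 3 = inactive) are my reading of the pg_shard source, so treat them as assumptions:

    -- List placement health; shard_state 1 is assumed to mean
    -- STATE_FINALIZED and 3 STATE_INACTIVE in pg_shard's metadata.
    SELECT shard_id, node_name, node_port, shard_state
    FROM pgs_distribution_metadata.shard_placement
    ORDER BY shard_id, node_name;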

Comment by onderkalaci
Thursday Apr 02, 2015 at 12:07 GMT


Hey @jasonmp85,

I examined the bug thoroughly. It seems this problem (or a closely related one) may lead to other bugs too.

First, let me outline the problem. Basically, our approach does the following (a sketch follows the list):

  1. Create a temporary table.
  2. Add a trigger to the temporary table.
  3. On each INSERT into the temporary table, the trigger INSERTs into the distributed table.
  4. Execute COPY into the temporary table; each copied row fires the trigger, and thus an INSERT into the distributed table.
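
Here is a minimal sketch of that approach, assuming the customer_reviews table from the pg_shard docs; the real script generates the staging table and trigger dynamically and handles COPY options, so the names here are illustrative only:

    -- Staging table mirrors the distributed table's columns.
    CREATE TEMPORARY TABLE customer_reviews_staging
        (LIKE customer_reviews INCLUDING DEFAULTS);

    CREATE FUNCTION route_to_distributed_table() RETURNS trigger AS $$
    BEGIN
        -- pg_shard intercepts this INSERT and routes the row to the
        -- matching shard placements on the worker nodes.
        INSERT INTO customer_reviews VALUES (NEW.*);
        RETURN NULL;  -- cancel the local insert; the staging table stays empty
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER route_rows
        BEFORE INSERT ON customer_reviews_staging
        FOR EACH ROW EXECUTE PROCEDURE route_to_distributed_table();

    -- COPY into the staging table then fires the trigger once per row:
    -- COPY customer_reviews_staging FROM '/path/to/reviews.csv' CSV;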

Now, the bug happens as follows (a sketch of the rollback problem follows the list):

  1. Stop one of the worker nodes.
  2. During the COPY command, the shard placements on the stopped worker node are marked as STATE_INACTIVE. Thus, COPY does not try to INSERT into the placements on the stopped worker node.
  3. Concurrent queries read the previous version of the metadata, in which all shard placement states are STATE_FINALIZED. (I guess this is due to MVCC.)
  4. If COPY fails, the transaction that COPY executes inside is rolled back.
  5. The shard placements that were marked STATE_INACTIVE become STATE_FINALIZED again.
  6. Thus, we end up with shard placements whose contents have diverged, but all of which report finalized state.
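
The rollback problem in isolation (the shard id, node name, and state values below are hypothetical stand-ins):

    -- The master-side metadata change is transactional, but the per-row
    -- INSERTs already committed on the reachable worker are not.
    BEGIN;

    UPDATE pgs_distribution_metadata.shard_placement
    SET shard_state = 3      -- STATE_INACTIVE (assumed value)
    WHERE node_name = 'worker-2' AND shard_id = 10001;

    -- Ctrl+C during COPY aborts the transaction:
    ROLLBACK;

    -- The placement reads as STATE_FINALIZED (1) again, although the
    -- surviving replica kept receiving rows this one never saw.
    SELECT shard_state
    FROM pgs_distribution_metadata.shard_placement
    WHERE node_name = 'worker-2' AND shard_id = 10001;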

This problem can also be observed in other forms, such as the following:

  1. Stop one of the worker nodes.
  2. During the COPY command, the shard placements on the stopped worker node are marked as STATE_INACTIVE.
  3. Concurrent queries read the previous version of the metadata, in which all shard placement states are STATE_FINALIZED (again, presumably due to MVCC). Thus, concurrent queries can read from STATE_INACTIVE shard placements (see the two-session sketch below).
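
A two-session illustration of the MVCC point (run in two separate psql sessions; same assumed metadata schema and state values as above):

    -- Session A: copy_to_distributed_table's transaction, still open.
    BEGIN;
    UPDATE pgs_distribution_metadata.shard_placement
    SET shard_state = 3      -- STATE_INACTIVE (assumed value)
    WHERE node_name = 'worker-2' AND shard_id = 10001;
    -- (not yet committed)

    -- Session B: a concurrent query planning its shard access.
    SELECT shard_id, node_name, shard_state
    FROM pgs_distribution_metadata.shard_placement
    WHERE shard_id = 10001;
    -- Session B still sees shard_state = 1 (STATE_FINALIZED), because
    -- session A's uncommitted update is invisible under MVCC, so the
    -- query may read from a placement COPY has already given up on.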

How can we approach a solution?

I tried to find some ways to handle this bug. What do you think about them? Which one should I follow? Or can you suggest any other way?

  1. Using COPY makes things difficult to handle, mainly because we run a single transaction on the master node but execute many separate transactions on the worker nodes, so rolling back all the INSERTs is not simple. One approach could be to change COPY into consecutive INSERTs, which in turn may decrease overall performance. This may also require a lot of code changes (handling COPY parameters, etc.).
  2. Change copy_to_distributed_table to use some kind of DO block or PL/pgSQL function instead of this part. Then write code that iterates over the file: on each iteration, copy a single line from the file into a temporary file, and COPY from that temporary file. This method incurs high overhead, but at least we keep the gains of using COPY and don't need to change too many lines of code. We need to handle the newline character when parsing the file, which can be done easily.
  3. I don't know whether it is doable, but if we could disable rollback for the transaction started by the COPY command, we could bypass this problem.
  4. Disable interrupts during execution of the script. We would still be prone to the bug if the server shuts down, and from a usability standpoint it is not a good approach either: if loading the data takes a long time (which is realistic) and the user wants to stop it, they wouldn't be able to. However, it is the easiest to implement. If we are going to implement first-class COPY soon, we could consider this and inform users about the current bug.


Comment by onderkalaci
Monday Apr 06, 2015 at 11:26 GMT


Hey @jasonmp85,
Changing the value of ON_ERROR_ROLLBACK has no effect in our case. I read this post, which explains the effects of the parameter in detail. Despite what the post says, I tested different values, but as I expected, the results didn't change.

To summarize what I observed:

  1. First, it is a psql feature, not a Postgres one; there is no way to instruct Postgres itself to ignore errors inside a transaction or to disable ROLLBACK.
  2. ON_ERROR_ROLLBACK is designed for error handling while executing consecutive commands inside a transaction. However, we only have a single transaction associated with the \COPY command.
  3. I thought of changing copy_to_insert so that it starts its own transaction. That way, the metadata would be updated inside that transaction, and quitting the script would not lead to this bug. However, as explained here, it is not possible to start a transaction inside a PL/pgSQL function (see the sketch after this list).
  4. I also thought of executing the triggers as separate transactions, with the same motivation as above. However, again, it is not possible, as explained here.
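
A quick demonstration of that limitation (PostgreSQL rejects transaction control issued from inside a function; the function name is made up):

    CREATE FUNCTION try_start_transaction() RETURNS void AS $$
    BEGIN
        START TRANSACTION;  -- transaction control is not allowed here
    END;
    $$ LANGUAGE plpgsql;

    SELECT try_start_transaction();
    -- ERROR:  cannot begin/end transactions in PL/pgSQL
    -- HINT:   Use a BEGIN block with an EXCEPTION clause instead.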

So, it seems that we cannot solve this problem by playing with ON_ERROR_ROLLBACK.


ozgune commented May 3, 2016

We implemented native COPY support with #33 and #225. Once we release v5.1, we plan to deprecate support for copy_to_distributed_table.
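
If so, loading would presumably reduce to a plain COPY straight into the distributed table, along these lines (path illustrative):

    -- With native COPY support, no staging table or trigger is needed:
    -- COPY customer_reviews FROM '/path/to/reviews.csv' WITH (FORMAT csv);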

@onderkalaci / @metdos, now that we have native COPY support for hash partitioned tables, could we close this issue?


metdos commented May 3, 2016

We added a deprecation note with #478.

metdos closed this as completed May 3, 2016