
feat: support refreshing Iceberg tables #5707

Open · wants to merge 52 commits into main

Conversation

@lbooker42 lbooker42 (Contributor) commented Jul 2, 2024

Add two methods of refreshing Iceberg tables:

  • Manual refreshing - the user specifies which snapshot to load; the engine parses the snapshot, adds/removes Iceberg data files as needed, and notifies downstream tables of the changes
  • Auto refreshing - at a user-configurable interval, the engine queries Iceberg for the latest snapshot, then parses and loads it

Example code:

Java: automatically and manually refreshing tables

import io.deephaven.iceberg.util.*;
import org.apache.iceberg.catalog.*;

adapter = IcebergToolsS3.createS3Rest(
        "minio-iceberg",
        "http://rest:8181",
        "s3a://warehouse/wh",
        "us-east-1",
        "admin",
        "password",
        "http://minio:9000");

//////////////////////////////////////////////////////////////////////

import io.deephaven.extensions.s3.*;

s3_instructions = S3Instructions.builder()
    .regionName("us-east-1")
    .credentials(Credentials.basic("admin", "password"))
    .endpointOverride("http://minio:9000")
    .build()

import io.deephaven.iceberg.util.IcebergUpdateMode;

// Automatic refreshing every 1 second 
iceberg_instructions = IcebergInstructions.builder()
    .dataInstructions(s3_instructions)
    .updateMode(IcebergUpdateMode.autoRefreshing(1_000L))
    .build()

// Automatic refreshing (default 60 seconds)
iceberg_instructions = IcebergInstructions.builder()
    .dataInstructions(s3_instructions)
    .updateMode(IcebergUpdateMode.AUTO_REFRESHING)
    .build()

// Load the table and monitor changes
sales_multi = adapter.readTable(
        "sales.sales_multi",
        iceberg_instructions)

//////////////////////////////////////////////////////////////////////

// Manual refreshing
iceberg_instructions = IcebergInstructions.builder()
    .dataInstructions(s3_instructions)
    .updateMode(IcebergUpdateMode.MANUAL_REFRESHING)
    .build()

// Load a table with a specific snapshot
sales_multi = adapter.readTable(
        "sales.sales_multi",
        5120804857276751995,
        iceberg_instructions)

// Update the table to a specific snapshot
sales_multi.update(848129305390678414)

// Update to the latest snapshot
sales_multi.update()

Python: automatically and manually refreshing tables

from deephaven.experimental import s3, iceberg

local_adapter = iceberg.adapter_s3_rest(
        name="minio-iceberg",
        catalog_uri="http://rest:8181",
        warehouse_location="s3a://warehouse/wh",
        region_name="us-east-1",
        access_key_id="admin",
        secret_access_key="password",
        end_point_override="http://minio:9000")

#################################################

s3_instructions = s3.S3Instructions(
        region_name="us-east-1",
        access_key_id="admin",
        secret_access_key="password",
        endpoint_override="http://minio:9000"
        )

# Auto-refresh every 1000 ms
iceberg_instructions = iceberg.IcebergInstructions(
        data_instructions=s3_instructions,
        update_mode=iceberg.IcebergUpdateMode.auto_refreshing(1000))

sales_multi = local_adapter.read_table(table_identifier="sales.sales_multi", instructions=iceberg_instructions)

#################################################

# Manually refresh the table
iceberg_instructions = iceberg.IcebergInstructions(
        data_instructions=s3_instructions,
        update_mode=iceberg.IcebergUpdateMode.MANUAL_REFRESHING)

sales_multi = local_adapter.read_table(
    table_identifier="sales.sales_multi",
    snapshot_id=5120804857276751995,
    instructions=iceberg_instructions)

sales_multi.update(848129305390678414)
sales_multi.update(3019545135163225470)
sales_multi.update()

@lbooker42 lbooker42 added this to the 0.36.0 milestone Jul 2, 2024
@lbooker42 lbooker42 self-assigned this Jul 2, 2024
@lbooker42 lbooker42 requested a review from rcaudy July 2, 2024 15:47
/**
* Notify the listener of a {@link TableLocationKey} encountered while initiating or maintaining the location
* subscription. This should occur at most once per location, but the order of delivery is <i>not</i>
* guaranteed.
*
* @param tableLocationKey The new table location key
*/
void handleTableLocationKey(@NotNull ImmutableTableLocationKey tableLocationKey);
void handleTableLocationKeyAdded(@NotNull ImmutableTableLocationKey tableLocationKey);
Member

Good change; may be breaking for DHE. Please consult Andy pre-merge.

void beginTransaction();

void endTransaction();

/**
* Notify the listener of a {@link TableLocationKey} encountered while initiating or maintaining the location
* subscription. This should occur at most once per location, but the order of delivery is <i>not</i>
@rcaudy rcaudy (Member) commented Jul 3, 2024

Consider whether we can have add + remove + add. What about remove + add in the same pull?
Should document that this may change the "at most once per location" guarantee, and define semantics.
I think it should be something like:
We allow re-add of a removed TLK. Downstream consumers should process these in an order that respects delivery and transactionality.

Within one transaction, expect at most one of "remove" or "add" for a given TLK.
Within one transaction, we can allow remove followed by add, but not add followed by remove. This dictates that we deliver pending removes before pending adds in processPending.
That is, one transaction allows:

  1. Replace a TLK (remove followed by add)
  2. Remove a TLK (remove)
  3. Add a TLK (add)
    Double add, double remove, or add followed by remove is right out.

Processing an addition to a transaction:

  1. Remove: If there's an existing accumulated remove, error. Else, if there's an existing accumulated add, error. Else, accumulate the remove.
  2. Add: If there's an existing accumulated add, error. Else, accumulate the add.

Across multiple transactions delivered as a batch, ensure that the right end-state is achieved.

  1. Add + remove collapses pairwise to no-op
  2. Remove + add (assuming prior add) should be processed in order. We might very well choose to not allow re-add at this time, I don't expect Iceberg to do this. If we do allow it, we need to be conscious that the removed location's region(s) need(s) to be used for previous data, while the added one needs to be used for current data.
  3. Multiple adds or removes without their opposite intervening are an error.

A null token should be handled exactly the same as a single-element transaction.

Processing a transaction:

  1. Process removes first. If there's an add pending, delete it and swallow the remove. Else, if there's a remove pending, error. Else, store the remove as pending.
  2. Process adds. If there's an add pending, error. Else, store the add as pending.

Note: removal support means that RegionedColumnSources may no longer be immutable! We need to be sure that we are aware of whether a particular TLP might remove data, and ensure that in those cases the RCS is not marked immutable. REVISED: ONLY REPLACE IS AN ISSUE FOR IMMUTABILITY, AS LONG AS WE DON'T REUSE SLOTS.

We discussed that TLPs should probably specify whether they are guaranteeing that they will never remove TLKs, and whether their TLs will never remove or modify rows. I think if and when we encounter data sources that require modify support, we should probably just use SourcePartitionedTable instead of PartitionAwareSourceTable.
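A minimal, self-contained sketch of the per-transaction accumulation rules above; all names are hypothetical, not this PR's actual API:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: accumulate at most one remove and one add per key per transaction,
// allowing remove-then-add (replace) but rejecting double add, double remove, and add-then-remove.
final class TransactionAccumulator<KEY> {
    private enum Pending { ADD, REMOVE, REMOVE_THEN_ADD }

    private final Map<KEY, Pending> pending = new HashMap<>();

    // A remove is only legal as the first action on a key within a transaction.
    void accumulateRemove(final KEY key) {
        if (pending.containsKey(key)) {
            throw new IllegalStateException("Duplicate remove, or add followed by remove: " + key);
        }
        pending.put(key, Pending.REMOVE);
    }

    // An add is legal on its own or after a remove (a replace); a second add is an error.
    void accumulateAdd(final KEY key) {
        final Pending existing = pending.get(key);
        if (existing == Pending.ADD || existing == Pending.REMOVE_THEN_ADD) {
            throw new IllegalStateException("Duplicate add in one transaction: " + key);
        }
        pending.put(key, existing == Pending.REMOVE ? Pending.REMOVE_THEN_ADD : Pending.ADD);
    }
}

Delivering pending removes before pending adds in processPending then yields the replace semantics described above.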

lbooker42 (Contributor, Author)

I'm not sure if I need to handle the RCS immutability question in this PR since Iceberg will not modify rows.

Member

Removing a region makes the values in the corresponding row key range disappear. That's OK for immutability.
If you allow a new region to use the same slot, or allow the old region to reincarnate in the same slot potentially with different data, you are violating immutability.

Not reusing slots means that a long-lived iceberg table may eventually exhaust its row key space.

Member

Replace (remove + add of a TLK) requires some kind of versioning of the TL, in a way that the TLK is aware of in order to ensure that we provide the table with the right TL for the version. AbstractTableLocationProvider's location caching layer is not currently sufficient for atomically replacing TLs.

@rcaudy rcaudy (Member) left a comment

I think we need to get slightly more complicated; I was wrong: the TableLocation is not sufficient for reference counting. We instead need to register "ownership interest" by TableLocationKey.

We should introduce ReferenceCountedImmutableTableLocationKey, and use that as the type delivered to TableLocationProvider.Listeners.
AbstractTableLocationProvider should bifurcate its state internally into:

  1. "live" set of TLKs (RCITLKs). Live set is set of keys shown to static consumers and new listeners.
  2. "available" map of TLK -> Object (which may be the TL, or the TLK). Available map allows TLs to be accessed. Keys are superset of live set.

Increments:

  1. Ref count on RCITLK to be held by ATLP as long as the TLK is in the “live” set;
  2. Ref count bumped before delivery to any Listener, once per listener.

Decrements:

  1. Listeners responsible to decrement if OBE, for example add followed by remove in a subscription buffer.
  2. SourceTable responsible to decrement if filtered out.
  3. RCSM responsible to decrement upon processing remove at end of cycle, or in its own destroy().

Notes:

  1. ATLP must unwrap TLK before giving to makeTableLocation
  2. RCITLK.onReferenceCountAtZero removes the TLK (and any TL that exists) from the available map. If the TL existed, sends a null update, and clears column locations.
  3. It's not bad that the RCITLK hard refs the ATLP; we already ref it from the SourceTable.

Enterprise note:
RemoteTableDataService is a non-issue, since it’s only used with Deephaven format. Meaning, we don’t need to extend this across the wire. Andy may have to deal with that, or we may have to evolve the API, in some future use case.

Missing feature:
TLP needs to advertise its update model.
This might be:
Single-partition, add-only -> append-only table -> partition removal bad
Multi-partition, add-only -> add-only table -> partition removal bad
Multi-partition, partition removes possible -> no shifts or mods (still immutable) -> partition removal OK
Not exposing any other models at this time (e.g. partitions that can have mods, shifts, removes; if we want that, use partitioned tables).
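A self-contained sketch of the ownership-interest idea using a plain AtomicInteger; the names are hypothetical, and this is not the actual RCITLK implementation:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a reference-counted wrapper around an immutable table location key.
final class ReferenceCountedKey<K> {
    private final K key;
    // The provider holds the initial reference for as long as the key is in its "live" set.
    private final AtomicInteger refCount = new AtomicInteger(1);

    ReferenceCountedKey(final K key) {
        this.key = key;
    }

    K get() {
        return key; // unwrap before handing to makeTableLocation
    }

    // Bumped once per listener before delivery.
    void retain() {
        if (refCount.getAndIncrement() <= 0) {
            throw new IllegalStateException("Retain after reference count reached zero: " + key);
        }
    }

    // Listeners, SourceTables, and column source managers release when they no longer need the key.
    void release() {
        if (refCount.decrementAndGet() == 0) {
            // The provider would remove the key (and any cached TableLocation) from its
            // "available" map here, deliver a null update, and clear column locations.
        }
    }
}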

final Collection<ImmutableTableLocationKey> immutableTableLocationKeys = foundLocationKeys.stream()
.map(LiveSupplier::get)
.collect(Collectors.toList());

// TODO (https://github.com/deephaven/deephaven-core/issues/867): Refactor around a ticking partition table
Member

Check and see if we can just close this ticket, and maybe delete the todo.

@rcaudy rcaudy (Member) left a comment

Finished table data infrastructure. May need to look further at Iceberg code and tests.

@rcaudy rcaudy (Member) left a comment

Minor commentary. We can dig into the rest of the Iceberg layer tomorrow.

final TableLocationProvider.Listener outputListener = getWrapped();
if (outputListener != null) {
outputListener.handleTableLocationKeysUpdate(addedKeys, removedKeys);
// Produce filtered lists of added and removed keys.
final Collection<LiveSupplier<ImmutableTableLocationKey>> filteredAddedKeys = addedKeys.stream()
Member

It would be nice to be able to generate these collections only once, rather than once per listener. Doing so would require something of a redesign; we'd probably need to work more like a SubscriptionAggregator. Let's comment (without using the word "todo" ;-)) on that optimization, but defer it for now.
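A sketch of that deferred optimization with simplified stand-in types (the real code would use LiveSupplier<ImmutableTableLocationKey>): filter once, then fan out to every listener.

import java.util.Collection;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

final class UpdateFanOut<KEY> {
    interface Listener<K> {
        void handleUpdate(Collection<K> added, Collection<K> removed);
    }

    // Compute the filtered collections once, rather than once per listener.
    void deliver(final Collection<KEY> added, final Collection<KEY> removed,
            final Predicate<KEY> filter, final List<Listener<KEY>> listeners) {
        final Collection<KEY> filteredAdded = added.stream().filter(filter).collect(Collectors.toList());
        final Collection<KEY> filteredRemoved = removed.stream().filter(filter).collect(Collectors.toList());
        for (final Listener<KEY> listener : listeners) {
            listener.handleUpdate(filteredAdded, filteredRemoved);
        }
    }
}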

* @return the most permissive mode encountered in the stream
*/
public static TableUpdateMode mostPermissiveMode(Stream<TableUpdateMode> modes) {
// Analyze the location update modes of the input providers to determine the location update mode
Member

You said this was inappropriate.

* @param modes a stream of modes
* @return the most permissive mode encountered in the stream
*/
public static TableUpdateMode mostPermissiveMode(Stream<TableUpdateMode> modes) {
Member

As-is, this could be simplified to sorting. I guess it could become more complicated if we introduce capabilities that don't "nest" in the future.
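For illustration, the sorting-based simplification could look like this, assuming the declaration order STATIC, APPEND_ONLY, ADD_ONLY, ADD_REMOVE runs from least to most permissive (a sketch, not the PR's implementation):

import java.util.Comparator;
import java.util.stream.Stream;

enum TableUpdateMode {
    STATIC, APPEND_ONLY, ADD_ONLY, ADD_REMOVE;

    // Enum natural order is declaration order, so max() picks the most permissive mode.
    static TableUpdateMode mostPermissiveMode(final Stream<TableUpdateMode> modes) {
        return modes.max(Comparator.naturalOrder()).orElse(STATIC);
    }
}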

/**
* Records the guarantees that the table offers regarding addition and removal of data elements. This can apply to
* {@link SourceTable source table} location additions and removals or row additions and removals within a location.
*/
public enum TableUpdateMode {
STATIC, APPEND_ONLY, ADD_ONLY, ADD_REMOVE;

/**
* Returns true if the addition is allowed.
Member

Suggested change
* Returns true if the addition is allowed.
* Returns true if addition is allowed.

@@ -22,6 +29,9 @@ public boolean addAllowed() {
}
}

/**
* Returns true if the removal is allowed.
Member

Suggested change
* Returns true if the removal is allowed.
* Returns true if removal is allowed.

@rcaudy rcaudy (Member) left a comment

Partial review of Iceberg-specific code.

import org.apache.iceberg.Snapshot;
import org.jetbrains.annotations.NotNull;

public interface IcebergTable extends Table {
Member

I think we might want to note that these methods are optional, and in practice only supported when the implementation requested supports manual updating.
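For example, the javadoc on the update methods might read like this (hypothetical wording, not the PR's final text):

import io.deephaven.engine.table.Table;

public interface IcebergTable extends Table {
    /**
     * Update this table to the latest snapshot. Optional operation: only supported when the
     * table was created with {@code IcebergUpdateMode.MANUAL_REFRESHING}.
     *
     * @throws UnsupportedOperationException if this table does not support manual refreshing
     */
    void update();
}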

* @param tableDefinition The {@link TableDefinition} describing the table schema
* @param description A human-readable description for this table
* @param componentFactory A component factory for creating column source managers
* @param locationProvider A {@link io.deephaven.engine.table.impl.locations.TableLocationProvider}, for use in
Member

This is a lie now.

Comment on lines +24 to +26
public static Builder builder() {
return ImmutableIcebergUpdateMode.builder();
}
Member

I think builder() and the Builder interface should be private at this time.


@Value.Immutable
@BuildableStyle
public abstract class IcebergUpdateMode {
Member

I think we might want to consider whether initial snapshot should be a parameter.

}

@Value.Default
public long autoRefreshMs() {
Member

This field should be documented. We should also have a @Check to ensure it's only set with compatible modes.
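A sketch of such a @Check, assuming an updateType() accessor and a 60-second default; the names and defaults here are hypothetical:

import org.immutables.value.Value;

@Value.Immutable
abstract class UpdateModeSketch {
    enum Mode { STATIC, MANUAL_REFRESHING, AUTO_REFRESHING }

    @Value.Default
    Mode updateType() {
        return Mode.STATIC;
    }

    /** The interval, in milliseconds, at which auto-refreshing tables poll for new snapshots. */
    @Value.Default
    long autoRefreshMs() {
        return 60_000L;
    }

    // Reject a customized interval unless the mode is actually auto-refreshing.
    @Value.Check
    void checkAutoRefreshMs() {
        if (autoRefreshMs() != 60_000L && updateType() != Mode.AUTO_REFRESHING) {
            throw new IllegalArgumentException(
                    "autoRefreshMs may only be customized when the update mode is AUTO_REFRESHING");
        }
    }
}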

}
}
}
@Deprecated(forRemoval = true)
Member

I'm OK with deleting these, unless there's a strong consensus otherwise.

*/
public Snapshot currentSnapshot() {
final List<Snapshot> snapshots = listSnapshots();
if (snapshots.isEmpty()) {
Member

You pointed out that this should probably not be an error.

* <td>The snapshot identifier (can be used for updating the table or loading a specific snapshot)</td>
* </tr>
* <tr>
* <td>TimestampMs</td>
Member

This is inaccurate.

Comment on lines +326 to +331
// Find the snapshot with the given snapshot id
final Snapshot tableSnapshot =
snapshot(tableSnapshotId).orElseThrow(() -> new IllegalArgumentException(
"Snapshot with id " + tableSnapshotId + " not found for table " + tableIdentifier));

return table(tableSnapshot, null);
Member

Please apply this kind of change anywhere we can avoid duplication.

Suggested change
// Find the snapshot with the given snapshot id
final Snapshot tableSnapshot =
snapshot(tableSnapshotId).orElseThrow(() -> new IllegalArgumentException(
"Snapshot with id " + tableSnapshotId + " not found for table " + tableIdentifier));
return table(tableSnapshot, null);
return table(tableSnapshotId, null);

/**
* Read a snapshot of an Iceberg table from the Iceberg catalog.
*
* @param tableSnapshot The snapshot id to load
Member

Suggested change
* @param tableSnapshot The snapshot id to load
* @param tableSnapshot The snapshot to load

@Nullable final IcebergInstructions instructions) {

// Do we want the latest or a specific snapshot?
final Snapshot snapshot = tableSnapshot != null ? tableSnapshot : table.currentSnapshot();
Member

If we're supposed to get the current snapshot, we should refresh() first.
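A sketch of that suggestion; Table#refresh() and Table#currentSnapshot() are real Iceberg API methods, while the surrounding structure is hypothetical:

import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

final class SnapshotResolver {
    // If no specific snapshot was requested, refresh metadata before taking the current one.
    static Snapshot resolve(final Table table, final Snapshot requested) {
        if (requested != null) {
            return requested;
        }
        table.refresh(); // pick up the latest metadata from the catalog
        return table.currentSnapshot(); // may be null if the table has no snapshots yet
    }
}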

Comment on lines +125 to +129
if (!initialized) {
refreshSnapshot();
activationSuccessful(this);
initialized = true;
}
Member

activationSuccessful(this) needs to be unconditional, and happen at the end of the method.

The remainder looks like a job for an implementation of doInitialization(), and a call to ensureInitialized().

* Refresh the table location provider with the latest snapshot from the catalog. This method will identify new
* locations and removed locations.
*/
private void refreshSnapshot() {
Member

I'm not clear what the behavior should be if there's no current snapshot. Empty? Get one? I think requiring the user to call update() or specify an initial snapshot is reasonable, but then refreshSnapshot() with no snapshot should be a no-op.
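Under that reading, refreshSnapshot() might look like this (hypothetical structure; only the Iceberg calls are real):

import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

final class SnapshotRefresher {
    private final Table table;
    private Snapshot current;

    SnapshotRefresher(final Table table) {
        this.table = table;
    }

    void refreshSnapshot() {
        table.refresh();
        final Snapshot latest = table.currentSnapshot();
        if (latest == null || (current != null && latest.snapshotId() == current.snapshotId())) {
            return; // no snapshot yet, or nothing new: a no-op
        }
        current = latest;
        // ...the provider would then diff data files and notify listeners of added/removed locations
    }
}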


@Override
protected @NotNull ColumnLocation makeColumnLocation(@NotNull String name) {
return null;
Member

Should this throw a UOE, or return a dummy?

Comment on lines +126 to +129
if (null == ProcessEnvironment.tryGet()) {
ProcessEnvironment.basicServerInitialization(Configuration.getInstance(),
"AbstractTableLocationProviderTest", new StreamLoggerImpl());
}
Member

I think this should come out.

Comment on lines +134 to +137
@Override
public void tearDown() throws Exception {
super.tearDown();
}
Member

Suggested change
@Override
public void tearDown() throws Exception {
super.tearDown();
}
