
October 2021 Release

@zachmu zachmu released this 20 Oct 18:08
· 6792 commits to main since this release
400af43

This is a regularly scheduled quarterly release of the library with many improvements and feature additions.

APIs are not guaranteed to settle until 1.0.

Merged PRs

go-mysql-server

  • 594: sql/plan: Fix join iterators to always Close() their secondary RowIter. Fix Subquery to Dispose its subquery.
  • 593: Make some update queries determinate
  • 591: Introduced a ViewProvider extension
    As part of this, got rid of the IndexRegistry and ViewRegistry on the context, put them on the Session instead. These changes significantly simplify the process of constructing an engine and running queries.
  • 590: Add dolt discord to readme
  • 588: Made it possible to use variables in AS OF expressions
    As part of this, also pulled resolving variables out of the resolve_columns step into its own rule.
  • 583: Update INNER JOIN Alpha
  • 582: sql/plan: exchange.go: Make Exchange's RowIter wait for all goroutines to shutdown cleanly.
    If we do not block on shutting down the goroutines that are handling the
    partitions, we cannot guarantee that we will not race with later uses of the
    sql.Session, for example.
    This converts the implementation to x/sync/errgroup and restructures things
    quite a bit.
  • 581: Skip fk validation on CREATE TABLE when fks are disabled
  • 579: Fixed panic on using an alias of a subquery expression in a where clause (now just an error)
  • 578: enginetest: Add BrokenQueries tests for projections in group by nodes not handling subquery expressions well.
  • 577: sql/analyzer: validation_rules: Disable validateSubqueryColumns, which was enabled in the latest releases but is still broken in some cases.
  • 576: Add Read Only transaction functionality
  • 575: Fixed bug in indexed joins
    When a query had two copies of the same table, and it could use an index to join them, it was non-deterministic which of the tables' (identical) indexes would be used. This choice doesn't matter for some implementations or even most queries, but in Dolt, if a query involves the same table at two different revisions (because of AS OF), it was arbitrary which table's index got returned. Because the dolt index implementation gets its data from its parent table, if it chose the wrong index it got the wrong data.
    This change restricts the choice of index to a single table name.
    Need tests demonstrating the bug, but those will have to live in Dolt for the time being.
  • 573: Add several features that unblock mysql workbench
    This PR implements:
    1. SHOW STATUS
    2. SET CHARACTER SET
  • 572: Add the EXISTS Operator for Select Where Filters
    Adds the exists operator as described here: https://dev.mysql.com/doc/refman/8.0/en/exists-and-not-exists-subqueries.html
  • 568: sql/analyzer: indexed_joins: Keep searching for usable indexes when there are NOT(? = ?) and NOT(? <=> ?) clauses in a join conjunction.
    The indexed join logic is perfectly happy to go forward with a join plan if it
    doesn't find a perfect index, or even a usable index at all, for one or more
    table factors in the join. This logic is probably left over from when that was
    not the case, but for now we make it a little more liberal to cover some cases
    we need to cover for a customer.
  • 567: sql/plan,analyzer: Fix HashLookup for cases where there is a schema prefix not visible to the direct join parent.
    This adds a TransformUpCtx function that passes along a SchemaPrefix in the TransformContext if the schema prefix is scrutable from resolved children at that point in the analysis. apply_hash_lookup makes use of this to make the transformed expressions in the HashLookup lookup node refer to the right place, even when JoinNode.Left().Schema() doesn't have the whole prefix.
    This schema prefix is probably useful in other places and I am exploring rationalizing certain places where the analyzer makes use of the schema by using it or something like it.
    In the mean time, converted some of the more obscure transform variants (UpWithParent, UpWithSelector) to use TransformUpCtx as well. Held off on moving TransformUp to TransformUpCtx.
    Very open to suggestions on names for TransformUpCtx.
  • 566: sql/analyzer: Make SubqueryAlias nodes always cacheable.
  • 565: sql: Removing ctx parameter from Expression.WithChildren.
    Also removes it from FunctionFn types, TransformExpression... functions, and Aggregation.NewBuffer.
    We think this ctx parameter might have been added a few months ago as part of some optimization work which never made it across the line. Instead of threading a *sql.Context everywhere, if we have need of a *sql.Context during analysis or to precompute or materialize a certain result in the future, I think I'm going to advocate for a specific optional interface that the analyzer is aware of. It could then pass the context through at a specific analyzer phase and the function/expression node would have the opportunity to get ready to do what it needs to.
  • 564: sql/plan: Defer returning error from IndexedInSubqueryFilter.RowIter until the Next() call.
    Fixes some interactions between INSERT IGNORE INTO and expressions which will
    fail on evaluation.
    This might be a credible strategy everywhere we .Eval within RowIter, for
    example, in indexed_table_access.
  • 563: Added an extension point for custom function providers
    As part of this, embarked on a major refactor:
    • Extracted interfaces for sql.Catalog and sql.ProcessList
    • Moved existing sql.Catalog to analyzer package
    • Moved ProcessList and MemoryManager out of Catalog
    • Changed Analyzer and Engine to take a DatabaseProvider instead of a Catalog
  • 560: Update some analyzer rules and expression behavior to deal with tuples in a more principled way.
    Subquery expression nodes can now return tuples.
    InSubquery expression nodes can work with tuples as expected.
    An analyzer validation step now returns operand errors in more cases, expecting
    almost all expressions to return one column, but special casing certain
    operators and functions which support tuples.
    Added some TODO tests for cases where our tuple comparisons are still not
    behaving as we want them to. In particular, null safe vs. non-null safe
    comparisons and type coercion of tuple subtypes in comparisons still need work.
  • 559: sql/analyzer: aliases.go: Make sure we add the right table alias when traversing a DecoratedNode.
  • 557: sql/analyzer/optimization_rules.go: moveJoinConditionsToFilter: Fix small inaccuracy where computed topJoin could be wrong.
    This could result in the optimization pass adding the same Filter node to
    multiple places in the join tree.
  • 556: sql/plan/indexed_table_access.go: Change static index lookup nodes to not keep returning their unused key expressions.
    These nodes take key expressions, but they do not evaluate them. They are not
    part of the evaluation tree, but they cause some problems with things like
    FixFieldIndexes and prune_columns, where all GetField expressions in the plan
    tree are expected to resolve to fields that are actually in scope and
    resolvable at the node that is being evaluated.
    This fixes a particular evaluation bug where a subquery expression in a join
    condition gets moved to the filter above the joins. If the moved subquery
    expression made use of a static index table lookup, the moved expression would
    fail to rewrite its field indexes appropriately and the query would return
    incorrect results.
  • 555: Fix handling of secure_file_priv for LOAD DATA
  • 554: Added locks, todo to mutable session state
  • 553: Wrote proper number conversion logic
    Previously we were relying on the cast library, which had a few issues with how we expect SQL numbers to function. One such issue was how number strings were handled (such as interpreting "01000" as a binary number rather than in decimal). There have been more issues in the past that have gone undocumented, but this should fix them (or allow for an easy fix since it's all custom logic now).
  • 552: Fixed actual default collation representative
  • 551: Check Error when building Session
  • 549: Mutable DatabaseProvider refactor
  • 548: Fix Resolve Defaults
  • 547: Expose CREATE DATABASE to integrators
    Remove AddDatabase functionality from Catalog and DatabaseProvider. Must have a DatabaseProvider with a static set of Databases to construct a Catalog.
    New Databases can be created with CreateDatabase, which is now an integrator function.
  • 544: sql/plan: exchange_test.go: Sometimes these tests fail under -race because they are racy. Fix that.
  • 540: sql/expression/function/aggregation: Change aggregation functions to work better with group by expressions.
    The existing code seems to be expecting rows to arrive in order of the group by expression. But the analyzer does not arrange for that to happen.
    Instead, this PR changes the approach so that each aggregation function duplicates its Child expression(s) as part of its per-aggregation state. That way it can Eval against its per-group-by Child and get the correct results out of Distinct for example.
    This PR updates AVG, FIRST, LAST, MAX, MIN and SUM to do this. COUNT(DISTINCT ...) is handled by a special expression node instead, and nothing has been changed in Count or CountDistinct. group_concat also seems to handle DISTINCT itself, and so I have not changed anything there. Json aggregation did not look immediately amenable to combining with DISTINCT, because the Update functions seemed to error when the child expression returned nil, so I have not changed them either.
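    The per-group approach can be sketched as follows (toy types, not the library's aggregation framework): each GROUP BY key owns its own state, including its own DISTINCT set, so correctness no longer depends on rows arriving sorted by the key.

```go
package main

import "fmt"

// rowKV is a toy (key, value) pair standing in for a row grouped by key.
type rowKV struct {
	Key string
	Val int
}

// sumDistinctByGroup computes SUM(DISTINCT val) per group. Each group
// gets a fresh state, mirroring how each aggregation function now
// duplicates its child expression per group.
func sumDistinctByGroup(rows []rowKV) map[string]int {
	type state struct {
		seen map[int]bool
		sum  int
	}
	groups := map[string]*state{}
	for _, r := range rows {
		st, ok := groups[r.Key]
		if !ok {
			st = &state{seen: map[int]bool{}}
			groups[r.Key] = st
		}
		if !st.seen[r.Val] { // DISTINCT lives inside the per-group state
			st.seen[r.Val] = true
			st.sum += r.Val
		}
	}
	out := map[string]int{}
	for k, st := range groups {
		out[k] = st.sum
	}
	return out
}

func main() {
	// Rows arrive in no particular order, interleaving groups.
	sums := sumDistinctByGroup([]rowKV{{"a", 1}, {"b", 2}, {"a", 1}, {"a", 3}})
	fmt.Println(sums["a"], sums["b"]) // 4 2
}
```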
  • 538: sql/datetimetype.go: Handle parsing dates like 2021-8-3 correctly.
  • 537: server/handler.go: Have ComPrepare correctly return null result schema metadata for write statements.
    This fixes our server's interaction with clients which expect the result
    metadata for a write statement to be NULL. One such client is RMariaDB. See:
    dolthub/dolt#2084.
  • 536: Zachmu/logger
  • 535: Added function to parse column type strings to SQL types
  • 534: server/server: Support configuring TLSConfig in the vitess Listener.
  • 532: Implement the CONVERT_TZ() SQL function
  • 530: add engine tests for dateparse
    Follow-up to #523
  • 529: Vinai/load file
  • 528: Hacky version of read committed isolation level, which begins a new transaction on every statement
    Basically it's read committed, but without the ability to turn off auto commit.
  • 523: implement built-in function str_to_date
    Closes #518
    This PR implements the STR_TO_DATE MySQL function. In places where the spec is ambiguous, I'm attempting to match the behavior of MySQL version 8 from my manual testing. I need to implement a few more parsers and add more test cases to cover the expected behavior.
    go test  -cover github.com/dolthub/go-mysql-server/sql/parse/dateparse
    ok    github.com/dolthub/go-mysql-server/sql/parse/dateparse  0.163s  coverage: 89.3% of statements
    
    cc @zachmu
  • 519: Fix explicit DEFAULT value in insert query
    It seems that go-mysql-server is not compatible with an INSERT query like:
    INSERT INTO `users` (`id`, `deleted`) VALUES (1, DEFAULT)
    where DEFAULT simply specifies that the deleted column should use its column default value explicitly.
    The issue can be demonstrated with the code below:
    package main

    import (
        dbsql "database/sql"

        sqle "github.com/dolthub/go-mysql-server"
        "github.com/dolthub/go-mysql-server/auth"
        "github.com/dolthub/go-mysql-server/memory"
        "github.com/dolthub/go-mysql-server/server"
        _ "github.com/go-sql-driver/mysql"
    )

    var engine *sqle.Engine

    func init() {
        engine = sqle.NewDefault()
        db := memory.NewDatabase("test")
        engine.AddDatabase(db)
        config := server.Config{
            Protocol: "tcp",
            Address:  "localhost:3307",
            Auth:     auth.NewNativeSingle("root", "", auth.AllPermissions),
        }
        s, _ := server.NewDefaultServer(config, engine)
        go s.Start()
    }

    func main() {
        db, _ := dbsql.Open("mysql", "root:@tcp(127.0.0.1:3307)/test")
        defer db.Close()

        query := `CREATE TABLE users (
            id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
            deleted tinyint(1) unsigned NOT NULL DEFAULT '0',
            PRIMARY KEY (id)
        ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4`
        stmt, _ := db.Prepare(query)
        stmt.Exec()

        _, err := db.Exec("INSERT INTO `users` (`id`, `deleted`) VALUES (1, DEFAULT)")
        if err != nil {
            panic(err.Error())
        }

        stmtOut, _ := db.Prepare("SELECT `deleted` FROM `users` WHERE `id` = 1")
        defer stmtOut.Close()
        deleted := true
        err = stmtOut.QueryRow().Scan(&deleted)
        if err != nil {
            panic(err.Error())
        }
        if deleted == true {
            panic("Wrong deleted value")
        }
    }
    Running the INSERT against the server results in:
    Error 1105: plan is not resolved because of node '*plan.Values'
    
    This pull request avoids the issue by turning an explicit DEFAULT in an INSERT query into an implicit one.
  • 516: Vinai/add drop pks
  • 512: Changed FILTER expressions to consider ENUM/SET types
    Previously a statement such as UPDATE test SET col = 2 WHERE col = 1; would fail for ENUM and SET types. For the FILTER node, we only consider the values for comparison, but the aforementioned types can match 'horse' with the number 50, so we have to consider the column type.
    Now this PR only affects ENUM and SET, although it's probably correct that all types should be used, with direct literal comparisons being used only when no other type can be derived from a column or variable. However, either MySQL has context-aware comparisons (my guess), or our comparison logic is wrong for other types, so I'm leaving those alone for now.
  • 510: Accept limit / offset as BindVar
    This should fix #509
  • 508: adds the time.Time utc format
    For some reason the time package does not define a standard format for time.Time's UTC string form. This adds the functionality here.
    cc. https://stackoverflow.com/questions/33119748/convert-time-time-to-string
    Fixes: dolthub/dolt#1903
  • 507: Delete From support for Empty Table
    Addressed bug found in dolthub/dolt#1923
  • 505: Vinai/ai noop
  • 503: Fix json number conversion
  • 502: From unixtime
    Extension of #490
  • 501: Fixed bug in alias resolution for aliases defined in group by and used in order by
    This fixes #499
  • 497: Add transactions test
    Adds tests for transactions and auto increment values.
  • 493: Fix Children() function of IndexedTableAccess
    Fix #478
  • 488: Modify ReadOnlyDatabaseHarness
  • 487: Andy/read only database
  • 486: Andy/database provider
    Adds a DatabaseProvider abstraction, allowing integrators to add their own logic to database resolution
  • 485: sql/plan: {update,delete}.go: Correctly return an error for unsupported UPDATEs and DELETEs against joins and subqueries.
  • 484: Add CLIENT_FOUND_ROWS functionality to INSERT on DUP.
    Also makes the capabilities flag a part of the client
  • 483: added JSON_OBJECT() sql function
  • 482: Vinai/fix bit type parsing
    Fixes parsing for bit type
  • 475: Fix JSON string comparison.
    Fix when a JSON is compared with a string, and apply
    unquote as MySQL does.
    Fixes #474
  • 473: sql/analyzer: indexed_joins: Use indexes in more cases where join condition includes AND ... IS [NOT] NULL.
  • 472: Fixed a bug involving aliases in filter pushdown, and made partial pushdown of filter predicates possible
  • 471: sql/analyzer: Add support for using indexes in joins where ON condition includes top level OR clauses.
    This adds support for concatenating the results from two or more index lookups against a subordinate table in order to find the rows matching a join condition structured as indexed_col = ... OR different_indexed_col = ....
  • 470: More accurate validation of GROUP BY expressions
    This PR resolves many ErrValidationGroupBy errors arising from SQLLogicTests.
  • 469: add ctx to WithFilters
  • 468: Parse empty bytes
    Fixes: dolthub/dolt#1818
  • 467: Add FIRST_VALUE
  • 465: cleanup locks parsing
  • 464: sql/analyzer: finalize_unions: Fix bug where we avoided finalizing union query plans.
  • 461: json array contains single element, and conversion fixes
    Fixes a json_contains issue where checking for a single element in an array fails, and fixes conversion from JSON to string.
  • 459: Add transaction status flag to handler
  • 458: extract_json quotation fix
    Uses a fork of oliveagle/jsonpath to fix issues with quotation marks in JSON paths. This will need to be revisited to fix some MySQL incompatibilities.
  • 457: Datetime fixes
  • 456: Fixed MEDIUMINT over the server not returning negative values
    Super simple fix. We should make some Script tests that work over the server, but we can do that at some later point.
  • 455: thread context through to function creation
  • 454: sql/analyzer/aliases.go: getTableAliases: Appropriately handle scope in recursive calls.
  • 453: Fix functionality for CLIENT_FOUND_ROWS
  • 450: Vinai/wire protocol updates
  • 449: Vinai/create table select
    This PR supports CREATE TABLE ... AS SELECT syntax, as well as an interface for copying tables.
    It does not yet support more sophisticated queries such as CREATE TABLE mytable(pk int) AS SELECT...; that is, tables with their own schemas are not interpreted, due to parsing limitations.
  • 448: initial comparison and like calcs based on collation
    Currently the LIKE operator always does a case sensitive match. This should be based on the collation. This puts initial support in for doing per collation comparison and like calculations. Currently the comparison changes are disabled as the codebase uses the compare function for more than just WHERE clause calculations which sometimes need to be case sensitive.
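    The collation-aware LIKE idea can be sketched by translating the pattern to a regexp and flagging case-insensitivity for _ci collations; the function and suffix check below are illustrative, not the library's implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// likeMatch reports whether s matches the SQL LIKE pattern under a
// (toy) collation: names ending in _ci compare case-insensitively.
func likeMatch(pattern, s, collation string) bool {
	re := regexp.QuoteMeta(pattern)          // escape regexp metacharacters
	re = strings.ReplaceAll(re, "%", ".*")   // LIKE % matches any run
	re = strings.ReplaceAll(re, "_", ".")    // LIKE _ matches one character
	if strings.HasSuffix(collation, "_ci") { // case-insensitive collation
		re = "(?i)" + re
	}
	return regexp.MustCompile("^" + re + "$").MatchString(s)
}

func main() {
	fmt.Println(likeMatch("abc%", "ABCDEF", "utf8mb4_0900_ai_ci")) // true
	fmt.Println(likeMatch("abc%", "ABCDEF", "utf8mb4_0900_as_cs")) // false
}
```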
  • 447: sql/analyzer: Fix index search for certain expressions which were not using table aliases appropriately.
  • 445: Allow limits and offsets to use value args (? in prepared statements)
    This fixes #439
  • 442: Vinai/temporary tables

vitess

  • 86: Add transaction characteristic to begin transaction
  • 85: can use status w/o quotes for insert statement
    re: dolthub/dolt#2195
  • 84: support for value and status keywords
    needed two features for mlflow:
    • column named value (alter table t modify value float(53) not null)
    • check named status (alter table a drop check status)
  • 83: Max/drop pk
  • 82: Max/status
    Status can be used as a constraint name and referenced from a constraint expression.
  • 81: Max/pk constraint
    sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1105, "syntax error at position 205 near 'PRIMARY'")
    [SQL:
    CREATE TABLE experiments (
    experiment_id INTEGER NOT NULL AUTO_INCREMENT,
    name VARCHAR(256) NOT NULL,
    artifact_location VARCHAR(256),
    lifecycle_stage VARCHAR(32),
    CONSTRAINT experiment_pk PRIMARY KEY (experiment_id),
    CONSTRAINT experiments_lifecycle_stage CHECK (lifecycle_stage IN ('active', 'deleted')),
    UNIQUE (name)
    )
    
  • 80: Vinai/lock tables
  • 79: Add transaction flag
  • 78: Add the found rows connection flag to the protocol
  • 77: Vinai/create table select
  • 76: Disallow non integer values for limit and offset, allow value args

Closed Issues

  • 586: memory/provider.go doesn't use a pointer receiver causing the struct to be copied
  • 570: Add support for read-only transactions
  • 499: "ORDER BY" expressions behaviour compatible with MySQL?
  • 569: Outdated example
  • 531: implement auto increment ddl support
  • 518: Implement built-in STR_TO_DATE
  • 480: any performance benchmark for dolt?
  • 517: can someone pls provide the instructions on how to port rocksdb as storage backend?
  • 509: Bug where using parameter for limit in prepare statement
  • 477: Cannot convert JSON number to number
  • 478: Cannot transform IndexedTableAccess nodes
  • 172: Support for prepared statements
  • 474: JSON string comparison seems broken (change in behavior from v0.6.0 and v0.10.0)
  • 251: LastInsertId always returns 0
  • 173: Better error message on syntax errors
  • 443: Support Insert operation for sqlx.
  • 439: Error: Unsupported feature: LIMIT with non-integer literal