# Release 422 (13 Jul 2023)

## General

## Security

## BigQuery connector
* Add support for writing to columns with a `timestamp(p) with time zone` type. (#17793)

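A minimal sketch of a write that exercises the new type mapping; the catalog, schema, table, and column names are hypothetical, assuming a column declared as `timestamp(6) with time zone`:

```sql
-- Hypothetical catalog/schema/table; "created_at" is a
-- timestamp(6) with time zone column in BigQuery.
INSERT INTO bigquery.sales.events (id, created_at)
VALUES (1, TIMESTAMP '2023-07-13 10:15:00 UTC');
```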
## Delta Lake connector
* Add support for renaming columns (example below). (#15821)
* Improve performance of reading from tables with a large number of checkpoints. (#17405)
* Disallow using the `vacuum` procedure when the max writer version is above 5. (#18095)

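A sketch of the column rename, with hypothetical catalog, schema, table, and column names; the `vacuum` call illustrates the procedure that is now rejected when the table's max writer version is above 5:

```sql
-- Rename a column (hypothetical names).
ALTER TABLE delta.sales.orders RENAME COLUMN order_ts TO ordered_at;

-- Run vacuum with a 7-day retention period; this now fails
-- if the table's max writer version is above 5.
CALL delta.system.vacuum('sales', 'orders', '7d');
```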
## Hive connector
* Add support for reading the `timestamp with local time zone` Hive type. (#1240)
* Add a native Avro file format writer. This can be disabled with the `avro.native-writer.enabled` configuration property or the `avro_native_writer_enabled` session property (example below). (#18064)
* Fix query failure when the `hive.recursive-directories` configuration property is set to `true` and partition names contain non-alphanumeric characters. (#18167)
* Fix incorrect results when reading text and `RCTEXT` files with a value that contains the character that separates fields. (#18215)
* Fix incorrect results when reading concatenated `GZIP` compressed text files. (#18223)
* Fix incorrect results when reading large text and sequence files with a single header row. (#18255)
* Fix incorrect reporting of bytes read for compressed text files. (#1828)

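A sketch of turning the native Avro writer off for a single session, assuming a catalog named `hive`; the equivalent catalog-wide setting is the `avro.native-writer.enabled` property in the catalog configuration file:

```sql
-- Disable the native Avro file format writer for the current session
-- (the catalog name "hive" is an assumption).
SET SESSION hive.avro_native_writer_enabled = false;
```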
## Iceberg connector
* Add support for adding nested fields with an `ADD COLUMN` statement (example below). (#16248)
* Add support for the `register_table` procedure to register Hadoop tables. (#16363)
* Change the default file format to Parquet. The `iceberg.file-format` catalog configuration property can be used to specify a different default file format. (#18170)
* Improve performance of reading `row` types from Parquet files. (#17387)
* Fix failure when writing to tables sorted on `UUID` or `TIME` types. (#18136)

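A sketch of adding a nested field and of registering an existing Hadoop table with the `register_table` procedure; the catalog, schema, table, column, and location names are hypothetical:

```sql
-- Add a nested field to an existing row-typed column named "address".
ALTER TABLE iceberg.web.customers ADD COLUMN address.zip varchar;

-- Register an existing Hadoop table at a known location with the catalog.
CALL iceberg.system.register_table('web', 'clicks', 'hdfs://namenode:8020/warehouse/web/clicks');
```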
## Kudu connector
* Add support for table comments when creating tables. (#17945)

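A sketch of a table comment supplied at creation time, with hypothetical catalog, table, and column names; the primary key and hash partitioning properties follow the Kudu connector's usual table requirements:

```sql
-- Hypothetical Kudu table created with a comment.
CREATE TABLE kudu.default.events (
  id int WITH (primary_key = true),
  payload varchar
)
COMMENT 'Raw event stream'
WITH (
  partition_by_hash_columns = ARRAY['id'],
  partition_by_hash_buckets = 2
);
```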
## Redshift connector
* Prevent returning incorrect results by throwing an error when encountering unsupported types. Previously, the query would fall back to the legacy type mapping. (#18209)