# Release 352 (9 Feb 2021)

## General

* Add support for the `WINDOW` clause, as shown in the example after this list. (#651)
* Allow prepared statement parameters for `SHOW STATS`. (#6582)
* Update tzdata version to 2020d. As a result, queries can no longer reference the `US/Pacific-New` zone, as it has been removed. (#6660)
* Add `plan-with-table-node-partitioning` feature config that corresponds to the existing `plan_with_table_node_partitioning` session property. (#6811)
* Improve performance of queries using the `rank()` window function. (#6333)
* Improve performance of `sum()` and `avg()` for `decimal` types. (#6951)
* Improve join performance. (#5981)
* Improve query planning time for queries using range predicates or large `IN` lists. (#6544)
* Fix window and streaming aggregation semantics regarding peer rows. Peer rows are now grouped using `IS NOT DISTINCT FROM` instead of the `=` operator. (#6472)
* Fix query failure when using an element of `array(timestamp(p))` in a complex expression for `p` greater than 6. (#6350)
* Fix failure when using geospatial functions in a join clause while `spatial_partitioning_table_name` is set. (#6587)
* Fix `CREATE TABLE AS` failure when the source table has hidden columns. (#6835)
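A minimal sketch of the new `WINDOW` clause and of prepared statement parameters in `SHOW STATS`; the `orders` table and its columns are hypothetical:

```sql
-- Define a named window once and reuse it across window functions.
SELECT
    clerk,
    orderdate,
    rank() OVER w AS rnk,
    sum(totalprice) OVER w AS running_total
FROM orders
WINDOW w AS (PARTITION BY clerk ORDER BY orderdate);

-- Parameters can now appear in a prepared SHOW STATS statement.
PREPARE stats_for_recent FROM
SHOW STATS FOR (SELECT * FROM orders WHERE orderdate > ?);

EXECUTE stats_for_recent USING DATE '2020-01-01';
```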
## Security

* Allow configuring the HTTP client used for OAuth2 authentication. (#6600)
* Add a token polling client API for OAuth2 authentication. (#6625)
* Support JWK with a certificate chain for OAuth2 authorization. (#6428)
* Add scopes to the OAuth2 configuration. (#6580)
* Optionally verify the JWT audience (`aud`) field for OAuth2 authentication. (#6501)
* Guard against replay attacks in OAuth2 by using a `nonce` cookie when the `openid` scope is requested. (#6580)
## JDBC driver

## Docker image

* Remove support for the configuration directory `/usr/lib/trino/etc`. The configuration should be provided in `/etc/trino`. (#6497)
## CLI

* Support user impersonation with password-based authentication via the `--session-user` command line option. (#6567)
## BigQuery connector

## Hive connector

* Add `UPDATE` support for ACID tables, as shown in the example after this list. (#5861)
* Match columns by index rather than by name by default for ORC ACID tables. (#6479)
* Match columns by name rather than by index by default for Parquet files. This can be changed using the `hive.parquet.use-column-names` configuration property and the `parquet_use_column_names` session property. (#6479)
* Remove the `hive.partition-use-column-names` configuration property and the `partition_use_column_names` session property. This is now determined automatically. (#6479)
* Support timestamps with microsecond or nanosecond precision (as configured with the `hive.timestamp-precision` property) nested within `array`, `map`, or `struct` data types. (#5195)
* Support reading from tables in SequenceFile format that use LZO compression. (#6452)
* Expose AWS HTTP client stats via JMX. (#6503)
* Allow specifying the S3 KMS key ID used for client-side encryption via the security mapping config and extra credentials. (#6802)
* Fix writing incorrect `timestamp` values within `row`, `array`, or `map` when using the Parquet file format. (#6760)
* Fix possible S3 connection leak on query failure. (#6849)
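A minimal sketch of `UPDATE` on a transactional (ACID) table and of the new Parquet column-matching session property; the `hive.default.users` table and its columns are hypothetical:

```sql
-- Update rows in place in a Hive ACID table.
UPDATE hive.default.users
SET email = lower(email)
WHERE registered_at < DATE '2020-01-01';

-- Revert to index-based column matching for Parquet files, if needed.
SET SESSION hive.parquet_use_column_names = false;
```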
## Iceberg connector

* Add the `iceberg.max-partitions-per-writer` config property to allow configuring the limit on partitions per writer. (#6650)
* Optimize cardinality-insensitive aggregations (`max()`, `min()`, `distinct()`, `approx_distinct()`) over identity partition columns with the `optimizer.optimize-metadata-queries` config property or the `optimize_metadata_queries` session property, as shown in the example after this list. (#5199)
* Provide the `use_file_size_from_metadata` catalog session property and the `iceberg.use-file-size-from-metadata` config property to fix query failures on tables with wrong file sizes stored in the metadata. (#6369)
* Fix the mapping of nested fields between table metadata and ORC file metadata. This enables evolution of `row` typed columns for Iceberg tables stored in ORC. (#6520)
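A minimal sketch of the metadata-optimized aggregation, assuming a hypothetical Iceberg table `iceberg.analytics.events` partitioned by an identity column `event_date`:

```sql
-- Enable metadata-based optimization for this session.
SET SESSION optimize_metadata_queries = true;

-- min() and max() over an identity partition column can now be
-- answered from table metadata instead of scanning data files.
SELECT min(event_date), max(event_date)
FROM iceberg.analytics.events;
```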
## Kinesis connector

* Support GZIP message compression. (#6442)
## MySQL connector

* Improve performance for certain complex queries involving aggregation and predicates (e.g., a `HAVING` clause) by pushing the aggregation and predicate computation into the remote database, as shown in the example after this list. (#6667)
* Improve performance for certain queries using the `stddev_pop`, `stddev_samp`, `var_pop`, and `var_samp` aggregation functions by pushing the aggregation and predicate computation into the remote database. (#6673)
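A sketch of a query shape that now benefits from pushdown; the `mysql.sales.orders` table and its columns are hypothetical:

```sql
-- The aggregation and the HAVING predicate can be computed by MySQL
-- instead of being evaluated in Trino.
SELECT customer_id, sum(total_price) AS spend
FROM mysql.sales.orders
GROUP BY customer_id
HAVING sum(total_price) > 1000;
```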
## PostgreSQL connector

* Improve performance for certain complex queries involving aggregation and predicates (e.g., a `HAVING` clause) by pushing the aggregation and predicate computation into the remote database. (#6667)
* Improve performance for certain queries using the `stddev_pop`, `stddev_samp`, `var_pop`, `var_samp`, `covar_pop`, `covar_samp`, `corr`, `regr_intercept`, and `regr_slope` aggregation functions by pushing the aggregation and predicate computation into the remote database, as shown in the example after this list. (#6731)
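Similarly, a sketch of a statistical aggregation that can now be computed by PostgreSQL; the `postgresql.public.measurements` table and its columns are hypothetical:

```sql
-- corr() and regr_slope() are pushed into PostgreSQL when possible.
SELECT corr(y, x), regr_slope(y, x)
FROM postgresql.public.measurements;
```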
## Redshift connector

* Use the Redshift JDBC driver to access Redshift. As a result, the `connection-url` in catalog configuration files needs to be updated from `jdbc:postgresql:...` to `jdbc:redshift:...`. (#6465)
## SQL Server connector

* Avoid query failures due to transaction deadlocks in SQL Server by using transaction snapshot isolation. (#6274)
* Honor the precision of SQL Server's `datetime2` type. (#6654)
* Add support for the Trino `timestamp` type in `CREATE TABLE` statements by mapping it to SQL Server's `datetime2` type, as shown in the example after this list. Previously, it was incorrectly mapped to SQL Server's `timestamp` type. (#6654)
* Add support for the `time` type. (#6654)
* Improve performance for certain complex queries involving aggregation and predicates (e.g., a `HAVING` clause) by pushing the aggregation and predicate computation into the remote database. (#6667)
* Fix failure when querying tables having indexes and constraints. (#6464)
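A minimal sketch of the new type mapping, assuming a catalog named `sqlserver` and a hypothetical table name:

```sql
-- created_at is created as SQL Server's datetime2 type; the declared
-- precision is honored rather than being silently changed.
CREATE TABLE sqlserver.dbo.events (
    id bigint,
    created_at timestamp(6)
);
```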
## SPI

* Add support for join pushdown via the `ConnectorMetadata.applyJoin()` method. (#6752)