ALTER TABLE

Modifies the properties, columns, or constraints for an existing table.

See also:

ALTER TABLE … ALTER COLUMN , CREATE TABLE , DROP TABLE , SHOW TABLES , DESCRIBE TABLE

Syntax

ALTER TABLE [ IF EXISTS ] <name> RENAME TO <new_table_name>

ALTER TABLE [ IF EXISTS ] <name> SWAP WITH <target_table_name>

ALTER TABLE [ IF EXISTS ] <name> { clusteringAction | tableColumnAction | constraintAction }

ALTER TABLE [ IF EXISTS ] <name> dataMetricFunctionAction

ALTER TABLE [ IF EXISTS ] <name> dataGovnPolicyTagAction

ALTER TABLE [ IF EXISTS ] <name> extTableColumnAction

ALTER TABLE [ IF EXISTS ] <name> searchOptimizationAction

ALTER TABLE [ IF EXISTS ] <name> SET
  [ DATA_RETENTION_TIME_IN_DAYS = <integer> ]
  [ MAX_DATA_EXTENSION_TIME_IN_DAYS = <integer> ]
  [ CHANGE_TRACKING = { TRUE | FALSE } ]
  [ DEFAULT_DDL_COLLATION = '<collation_specification>' ]
  [ ENABLE_SCHEMA_EVOLUTION = { TRUE | FALSE } ]
  [ CONTACT ( <purpose> = <contact> [ , <purpose> = <contact> ... ] ) ]
  [ COMMENT = '<string_literal>' ]

ALTER TABLE [ IF EXISTS ] <name> UNSET {
                                       DATA_RETENTION_TIME_IN_DAYS     |
                                       MAX_DATA_EXTENSION_TIME_IN_DAYS |
                                       CHANGE_TRACKING                 |
                                       DEFAULT_DDL_COLLATION           |
                                       ENABLE_SCHEMA_EVOLUTION         |
                                       CONTACT <purpose>               |
                                       COMMENT
                                       }
                                       [ , ... ]

Where:

clusteringAction ::=
  {
     CLUSTER BY ( <expr> [ , <expr> , ... ] )
     /* RECLUSTER is deprecated */
   | RECLUSTER [ MAX_SIZE = <budget_in_bytes> ] [ WHERE <condition> ]
     /* { SUSPEND | RESUME } RECLUSTER is a valid action */
   | { SUSPEND | RESUME } RECLUSTER
   | DROP CLUSTERING KEY
  }
tableColumnAction ::=
  {
     ADD [ COLUMN ] [ IF NOT EXISTS ] <col_name> <col_type>
        [
           {
              DEFAULT <default_value>
              | { AUTOINCREMENT | IDENTITY }
                 /* AUTOINCREMENT (or IDENTITY) is supported only for               */
                 /* columns with numeric data types (NUMBER, INT, FLOAT, etc.).     */
                 /* Also, if the table is not empty (that is, if the table contains */
                 /* any rows), only DEFAULT can be altered.                         */
                 [
                    {
                       ( <start_num> , <step_num> )
                       | START <num> INCREMENT <num>
                    }
                 ]
                 [ { ORDER | NOORDER } ]
           }
        ]
        [ inlineConstraint ]
        [ COLLATE '<collation_specification>' ]

   | RENAME COLUMN <col_name> TO <new_col_name>

   | { ALTER | MODIFY } [ ( ]
                            [ COLUMN ] <col1_name> DROP DEFAULT
                          , [ COLUMN ] <col1_name> SET DEFAULT <seq_name>.NEXTVAL
                          , [ COLUMN ] <col1_name> { [ SET ] NOT NULL | DROP NOT NULL }
                          , [ COLUMN ] <col1_name> [ [ SET DATA ] TYPE ] <type>
                          , [ COLUMN ] <col1_name> COMMENT '<string>'
                          , [ COLUMN ] <col1_name> UNSET COMMENT
                        [ , [ COLUMN ] <col2_name> ... ]
                        [ , ... ]
                    [ ) ]

   | DROP [ COLUMN ] [ IF EXISTS ] <col1_name> [, <col2_name> ... ]
  }

  inlineConstraint ::=
    [ NOT NULL ]
    [ CONSTRAINT <constraint_name> ]
    { UNIQUE | PRIMARY KEY | { [ FOREIGN KEY ] REFERENCES <ref_table_name> [ ( <ref_col_name> ) ] } }
    [ <constraint_properties> ]

For detailed syntax and examples for altering columns, see ALTER TABLE … ALTER COLUMN.

For detailed syntax and examples for creating/altering inline constraints, see CREATE | ALTER TABLE … CONSTRAINT.

dataMetricFunctionAction ::=

    SET DATA_METRIC_SCHEDULE = {
        '<num> MINUTE'
      | 'USING CRON <expr> <time_zone>'
      | 'TRIGGER_ON_CHANGES'
    }

  | UNSET DATA_METRIC_SCHEDULE

  | { ADD | DROP } DATA METRIC FUNCTION <metric_name>
      ON ( <col_name> [ , ... ]
      [ , TABLE ( <table_name> ( <col_name> [ , ... ] ) ) ] )
      [ , <metric_name_2> ON ( <col_name> [ , ... ]
        [ , TABLE ( <table_name> ( <col_name> [ , ... ] ) ) ] ) ]

  | MODIFY DATA METRIC FUNCTION <metric_name>
      ON ( <col_name> [ , ... ]
      [ , TABLE ( <table_name> ( <col_name> [ , ... ] ) ) ] ) { SUSPEND | RESUME }
      [ , <metric_name_2> ON ( <col_name> [ , ... ]
        [ , TABLE ( <table_name> ( <col_name> [ , ... ] ) ) ] ) { SUSPEND | RESUME } ]
dataGovnPolicyTagAction ::=
  {
      SET TAG <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ]
    | UNSET TAG <tag_name> [ , <tag_name> ... ]
  }
  |
  {
      ADD ROW ACCESS POLICY <policy_name> ON ( <col_name> [ , ... ] )
    | DROP ROW ACCESS POLICY <policy_name>
    | DROP ROW ACCESS POLICY <policy_name> ,
        ADD ROW ACCESS POLICY <policy_name> ON ( <col_name> [ , ... ] )
    | DROP ALL ROW ACCESS POLICIES
  }
  |
  {
      SET AGGREGATION POLICY <policy_name>
        [ ENTITY KEY ( <col_name> [ , ... ] ) ]
        [ FORCE ]
    | UNSET AGGREGATION POLICY
  }
  |
  {
      SET JOIN POLICY <policy_name>
        [ FORCE ]
    | UNSET JOIN POLICY
  }
  |
  ADD [ COLUMN ] [ IF NOT EXISTS ] <col_name> <col_type>
    [ [ WITH ] MASKING POLICY <policy_name>
          [ USING ( <col_name> , <cond_col_1> , ... ) ] ]
    [ [ WITH ] PROJECTION POLICY <policy_name> ]
    [ [ WITH ] TAG ( <tag_name> = '<tag_value>'
          [ , <tag_name> = '<tag_value>' , ... ] ) ]
  |
  {
    { ALTER | MODIFY } [ COLUMN ] <col_name>
        SET MASKING POLICY <policy_name>
          [ USING ( <col_name> , <cond_col_1> , ... ) ] [ FORCE ]
      | UNSET MASKING POLICY
  }
  |
  {
    { ALTER | MODIFY } [ COLUMN ] <col_name>
        SET PROJECTION POLICY <policy_name>
          [ FORCE ]
      | UNSET PROJECTION POLICY
  }
  |
  { ALTER | MODIFY } [ COLUMN ] <col1_name> SET TAG
      <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ]
      , [ COLUMN ] <col2_name> SET TAG
          <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ]
  |
  { ALTER | MODIFY } [ COLUMN ] <col1_name> UNSET TAG <tag_name> [ , <tag_name> ... ]
                   , [ COLUMN ] <col2_name> UNSET TAG <tag_name> [ , <tag_name> ... ]
extTableColumnAction ::=
  {
     ADD [ COLUMN ] [ IF NOT EXISTS ] <col_name> <col_type> AS ( <expr> )

   | RENAME COLUMN <col_name> TO <new_col_name>

   | DROP [ COLUMN ] [ IF EXISTS ] <col1_name> [, <col2_name> ... ]
  }
constraintAction ::=
  {
     ADD outoflineConstraint
   | RENAME CONSTRAINT <constraint_name> TO <new_constraint_name>
   | { ALTER | MODIFY } { CONSTRAINT <constraint_name> | PRIMARY KEY | UNIQUE | FOREIGN KEY } ( <col_name> [ , ... ] )
                         [ [ NOT ] ENFORCED ] [ VALIDATE | NOVALIDATE ] [ RELY | NORELY ]
   | DROP { CONSTRAINT <constraint_name> | PRIMARY KEY | UNIQUE | FOREIGN KEY } ( <col_name> [ , ... ] )
                         [ CASCADE | RESTRICT ]
  }

  outoflineConstraint ::=
    [ CONSTRAINT <constraint_name> ]
    {
       UNIQUE [ ( <col_name> [ , <col_name> , ... ] ) ]
     | PRIMARY KEY [ ( <col_name> [ , <col_name> , ... ] ) ]
     | [ FOREIGN KEY ] [ ( <col_name> [ , <col_name> , ... ] ) ]
                          REFERENCES <ref_table_name> [ ( <ref_col_name> [ , <ref_col_name> , ... ] ) ]
    }
    [ <constraint_properties> ]

For detailed syntax and examples for creating/altering out-of-line constraints, see CREATE | ALTER TABLE … CONSTRAINT.

searchOptimizationAction ::=
  {
     ADD SEARCH OPTIMIZATION [
       ON <search_method_with_target> [ , <search_method_with_target> ... ]
     ]

   | DROP SEARCH OPTIMIZATION [
       ON { <search_method_with_target> | <column_name> | <expression_id> }
          [ , ... ]
     ]
  }

For details, see Search optimization actions (searchOptimizationAction).

Parameters

name

Identifier for the table to alter. If the identifier contains spaces or special characters, the entire string must be enclosed in double quotes. Identifiers enclosed in double quotes are also case sensitive.

RENAME TO new_table_name

Renames the specified table with a new identifier that is not currently used by any other tables in the schema.

For more information about table identifiers, see Identifier requirements.

You can move the object to a different database and/or schema while optionally renaming the object. To do so, specify a qualified new_table_name value that includes the new database and/or schema name in the form db_name.schema_name.object_name or schema_name.object_name, respectively.

Note

  • The destination database and/or schema must already exist. In addition, an object with the same name cannot already exist in the new location; otherwise, the statement returns an error.

  • Moving an object to a managed access schema is prohibited unless the object owner (that is, the role that has the OWNERSHIP privilege on the object) also owns the target schema.

When an object (table, column, etc.) is renamed, other objects that reference it must be updated with the new name.

SWAP WITH target_table_name

Swap renames two tables in a single transaction.

Note that swapping a permanent or transient table with a temporary table, which persists only for the duration of the user session in which it was created, is not allowed. This restriction prevents a naming conflict that could occur when a temporary table is swapped with a permanent or transient table, and an existing permanent or transient table has the same name as the temporary table. To swap a permanent or transient table with a temporary table, use three ALTER TABLE ... RENAME TO statements: Rename table a to c, b to a, and then c to b.

Note

To rename a table or swap two tables, the role used to perform the operation must have OWNERSHIP privileges on the table(s). In addition, renaming a table requires the CREATE TABLE privilege on the schema for the table.
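For example, the rename and swap actions look like the following (table names t1, t1_renamed, and t2 are illustrative):

```sql
-- Rename a table (requires OWNERSHIP on the table and CREATE TABLE on the schema).
ALTER TABLE IF EXISTS t1 RENAME TO t1_renamed;

-- Swap the names of two permanent tables in a single transaction.
ALTER TABLE t1_renamed SWAP WITH t2;
```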

SET ...

Specifies one or more properties/parameters to set for the table (separated by blank spaces, commas, or new lines):

DATA_RETENTION_TIME_IN_DAYS = integer

Object-level parameter that modifies the retention period for the table for Time Travel. For more information, see Understanding & using Time Travel and Working with Temporary and Transient Tables.

For a detailed description of this parameter, as well as more information about object parameters, see Parameters.

Values:

  • Standard Edition: 0 or 1

  • Enterprise Edition:

    • 0 to 90 for permanent tables

    • 0 or 1 for temporary and transient tables

Note

A value of 0 effectively disables Time Travel for the table.
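As a sketch, assuming a hypothetical permanent table named orders on Enterprise Edition:

```sql
-- Extend the Time Travel retention period to 30 days.
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- A value of 0 effectively disables Time Travel for the table.
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 0;
```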

MAX_DATA_EXTENSION_TIME_IN_DAYS = integer

Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for the table to prevent streams on the table from becoming stale.

For a detailed description of this parameter, see MAX_DATA_EXTENSION_TIME_IN_DAYS.

CHANGE_TRACKING = TRUE | FALSE

Specifies to enable or disable change tracking on the table.

  • TRUE enables change tracking on the table. This option adds several hidden columns to the source table and begins storing change tracking metadata in the columns. These columns consume a small amount of storage.

    The change tracking metadata can be queried using the CHANGES clause for SELECT statements, or by creating and querying one or more streams on the table.

  • FALSE disables change tracking on the table. Associated hidden columns are dropped from the table.
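For example, assuming a hypothetical table named orders, you can enable change tracking and then read the change metadata with the CHANGES clause:

```sql
-- Enable change tracking (adds hidden metadata columns to the table).
ALTER TABLE orders SET CHANGE_TRACKING = TRUE;

-- Query the changes recorded over the past hour.
SELECT *
  FROM orders
  CHANGES (INFORMATION => DEFAULT)
  AT (TIMESTAMP => DATEADD(hour, -1, CURRENT_TIMESTAMP()));
```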

DEFAULT_DDL_COLLATION = 'collation_specification'

Specifies a default collation specification for any new columns added to the table.

Setting the parameter does not change the collation specification for any existing columns.

For more information about the parameter, see DEFAULT_DDL_COLLATION.

ENABLE_SCHEMA_EVOLUTION = { TRUE | FALSE }

Enables or disables automatic changes to the table schema from data loaded into the table from source files, including:

  • Added columns.

    By default, schema evolution is limited to a maximum of 10 added columns per load operation. To request more than 10 added columns per load operation, contact Snowflake Support.

  • The NOT NULL constraint can be dropped from any number of columns missing in new data files.

Setting it to TRUE enables automatic table schema evolution. The default FALSE disables automatic table schema evolution.

Note

Loading data from files evolves the table columns when all of the following are true:

  • The COPY INTO <table> statement includes the MATCH_BY_COLUMN_NAME option.

  • The role used to load the data has the EVOLVE SCHEMA or OWNERSHIP privilege on the table.

  • Additionally, for schema evolution with CSV files, ERROR_ON_COLUMN_COUNT_MISMATCH must be set to FALSE when MATCH_BY_COLUMN_NAME and PARSE_HEADER are used.
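The conditions above can be sketched as follows (the table orders and stage my_stage are assumptions for illustration):

```sql
-- Allow loads to add new columns found in incoming Parquet files.
ALTER TABLE orders SET ENABLE_SCHEMA_EVOLUTION = TRUE;

-- Schema evolution applies only when loading by column name.
COPY INTO orders
  FROM @my_stage
  FILE_FORMAT = (TYPE = PARQUET)
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```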

    CONTACT ( purpose = contact [ , purpose = contact ... ] )

    Associates the existing object with one or more contacts.

    COMMENT = 'string_literal'

    Adds a comment or overwrites the existing comment for the table.

    Note

    Do not specify copy options using the CREATE STAGE, ALTER STAGE, CREATE TABLE, or ALTER TABLE commands. We recommend that you use the COPY INTO <table> command to specify copy options.

    UNSET ...

    Specifies one or more properties/parameters to unset for the table, which resets them back to their defaults:

    • DATA_RETENTION_TIME_IN_DAYS

    • MAX_DATA_EXTENSION_TIME_IN_DAYS

    • CHANGE_TRACKING

    • DEFAULT_DDL_COLLATION

    • ENABLE_SCHEMA_EVOLUTION

    • CONTACT purpose

    • COMMENT
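    For example, assuming a hypothetical table named orders, several parameters can be reset in one statement:

    ```sql
    -- Reset the retention period and remove the table comment.
    ALTER TABLE orders UNSET
      DATA_RETENTION_TIME_IN_DAYS,
      COMMENT;
    ```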

    Clustering actions (clusteringAction)

    CLUSTER BY ( expr [ , expr , ... ] )

    Specifies (or modifies) one or more table columns or column expressions as the clustering key for the table. These are the columns/expressions for which clustering is maintained by Automatic Clustering.

    Important

    Clustering keys are not intended or recommended for all tables; they typically benefit very large (that is, multi-terabyte) tables.

    Before you specify a clustering key for a table, please see Understanding Snowflake Table Structures.
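    As a sketch (table orders and its columns are illustrative):

    ```sql
    -- Define (or replace) the clustering key for a large table.
    ALTER TABLE orders CLUSTER BY (order_date, region);

    -- A clustering key can also use expressions.
    ALTER TABLE orders CLUSTER BY (TO_DATE(created_at));
    ```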

    RECLUSTER ...

    Deprecated

    Performs manual, incremental reclustering of a table that has a clustering key defined:

    MAX_SIZE = budget_in_bytes

    Deprecated — use a larger warehouse to achieve more effective manual reclustering

    Specifies the upper-limit on the amount of data (in bytes) in the table to recluster.

    WHERE condition

    Specifies a condition or range on which to recluster data in the table.

    Note

    Only roles with the OWNERSHIP or INSERT privilege on a table can recluster the table.

    SUSPEND | RESUME RECLUSTER

    Enables or disables Automatic Clustering for the table.

    DROP CLUSTERING KEY

    Drops the clustering key for the table.

    For more information about clustering keys and reclustering, see Understanding Snowflake Table Structures.

    Table column actions (tableColumnAction)

    ADD [ COLUMN ] [ IF NOT EXISTS ] col_name col_data_type
      [ DEFAULT default_value | AUTOINCREMENT ... ]
      [ inlineConstraint ]
      [ COLLATE 'collation_specification' ]
      [ [ WITH ] MASKING POLICY policy_name ]
      [ [ WITH ] PROJECTION POLICY policy_name ]
      [ [ WITH ] TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] ) ]
      [ , ... ]

    Adds a new column. You can specify a default value, an inline constraint, a collation specification, a masking policy, and/or one or more tags.

    A default value for a column that you are adding must be a literal value; it cannot be an expression or a value returned by a function. For example, the following command returns an expected error:

    ALTER TABLE t1 ADD COLUMN c5 VARCHAR DEFAULT 12345::VARCHAR;

    002263 (22000): SQL compilation error:
    Invalid column default expression [CAST(12345 AS VARCHAR(16777216))]

    When you first create a table, you can use expressions as default values, but not when you add columns.

    The default value for a column must match the data type of the column. An attempt to set a default value with a non-matching data type fails with an error. For example:

    ALTER TABLE t1 ADD COLUMN c6 DATE DEFAULT '20230101';

    002023 (22000): SQL compilation error:
    Expression type does not match column data type, expecting DATE but got VARCHAR(8) for column C6

    ADD COLUMN operations can be performed on multiple columns in the same command.

    If you are not sure if the column already exists, you can specify IF NOT EXISTS when adding the column. If the column already exists, ADD COLUMN has no effect on the existing column and does not result in an error.

    Note

    You cannot specify IF NOT EXISTS if you are also specifying any of the following for the new column:

    • DEFAULT, AUTOINCREMENT, or IDENTITY

    • UNIQUE, PRIMARY KEY, or FOREIGN KEY
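    The rules above can be sketched as follows (table t1 and the column names are illustrative):

    ```sql
    -- Add two columns in one statement; a default must be a literal value.
    ALTER TABLE t1 ADD COLUMN
      c5 VARCHAR DEFAULT 'unknown',
      c6 NUMBER(10,2);

    -- IF NOT EXISTS is allowed only without DEFAULT, AUTOINCREMENT/IDENTITY,
    -- or UNIQUE/PRIMARY KEY/FOREIGN KEY clauses.
    ALTER TABLE t1 ADD COLUMN IF NOT EXISTS c7 DATE;
    ```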

    RENAME COLUMN col_name TO new_col_name

    Renames the specified column to a new name that is not currently used for any other columns in the table.

    You cannot rename a column that is part of a clustering key.

    When an object (table, column, etc.) is renamed, other objects that reference it must be updated with the new name.

    DROP COLUMN [ IF EXISTS ] col_name [ CASCADE | RESTRICT ]

    Removes the specified column from the table.

    If you are not sure if the column already exists, you can specify IF EXISTS when dropping the column. If the column does not exist, DROP COLUMN has no effect and does not result in an error.

    Dropping a column is a metadata-only operation. It does not immediately re-write the micro-partition(s) and therefore does not immediately free up the space used by the column. Typically, the space within an individual micro-partition is freed the next time that the micro-partition is re-written, which is typically when a write is done either due to DML (INSERT, UPDATE, DELETE) or re-clustering.
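    For example (table t1 and its columns are illustrative):

    ```sql
    -- Rename a column, then drop columns; dropping is a metadata-only operation.
    ALTER TABLE t1 RENAME COLUMN c5 TO c5_old;
    ALTER TABLE t1 DROP COLUMN IF EXISTS c5_old, c6;
    ```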

    Data metric function actions (dataMetricFunctionAction)

    DATA_METRIC_SCHEDULE ...

    Specifies the schedule to run the data metric function periodically.

    'num MINUTE'

    Specifies an interval (in minutes) of wait time inserted between runs of the data metric function. Accepts positive integers only.

    Also supports num M syntax.

    For data metric functions, use one of the following values: 5, 15, 30, 60, 720, or 1440.
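    For example, assuming a hypothetical table named orders with associated data metric functions:

    ```sql
    -- Run the table's associated data metric functions every 60 minutes.
    ALTER TABLE orders SET DATA_METRIC_SCHEDULE = '60 MINUTE';

    -- Remove the schedule.
    ALTER TABLE orders UNSET DATA_METRIC_SCHEDULE;
    ```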

    'USING CRON expr time_zone'

    Specifies a cron expression and time zone for periodically running the data metric function. Supports a subset of standard cron utility syntax.

    For a list of time zones, see the list of tz database time zones.

    The cron expression consists of the following fields, and the periodic interval must be at least 5 minutes:

    # __________ minute (0-59)
    # | ________ hour (0-23)
    # | | ______ day of month (1-31, or L)
    # | | | ____ month (1-12, JAN-DEC)
    # | | | | _ day of week (0-6, SUN-SAT, or L)
    # | | | | |
    # | | | | |
      * * * * *
    

    The following special characters are supported:

    *

    Wildcard. Specifies any occurrence of the field.

    L

    Stands for “last”. When used in the day-of-week field, it allows you to specify constructs such as “the last Friday” (“5L”) of a given month. In the day-of-month field, it specifies the last day of the month.

    /{n}

    Indicates the nth instance of a given unit of time. Each quanta of time is computed independently. For example, if 4/3 is specified in the month field, then the data metric function is scheduled for April, July and October (i.e. every 3 months, starting with the 4th month of the year). The same schedule is maintained in subsequent years. That is, the data metric function is not scheduled to run in January (3 months after the October run).

    Note

    • The cron expression currently evaluates against the specified time zone only. Altering the TIMEZONE parameter value for the account (or setting the value at the user or session level) does not change the time zone for the data metric function.

    • The cron expression defines all valid run times for the data metric function. Snowflake attempts to run a data metric function based on this schedule; however, any valid run time is skipped if a previous run has not completed before the next valid run time starts.

    • When both a specific day of month and day of week are included in the cron expression, then the data metric function is scheduled on days satisfying either the day of month or day of week. For example, DATA_METRIC_SCHEDULE = 'USING CRON 0 0 10-20 * TUE,THU UTC' schedules a data metric function at 0AM on any 10th to 20th day of the month and also on any Tuesday or Thursday outside of those dates.

    • The shortest granularity of time in cron is minutes.

      If a data metric function is resumed during the minute defined in its cron expression, the first scheduled run of the data metric function is the next occurrence of the instance of the cron expression. For example, if a data metric function scheduled to run daily at midnight (USING CRON 0 0 * * *) is resumed at midnight plus 5 seconds (00:00:05), the first data metric function run is scheduled for the following midnight.

    'TRIGGER_ON_CHANGES'

    Specifies that the DMF runs when a DML operation modifies the table, such as inserting a new row or deleting a row.

    You can specify 'TRIGGER_ON_CHANGES' for the following objects:

    • Dynamic tables

    • External tables

    • Apache Iceberg™ tables

    • Regular tables

    • Temporary tables

    • Transient tables

    You cannot specify 'TRIGGER_ON_CHANGES' for views.

    Changes to the table as a result of reclustering do not trigger the DMF to run.

    { ADD | DROP } DATA METRIC FUNCTION metric_name

    Identifier of the data metric function to add to the table or view or drop from the table or view.

    ON ( col_name [ , ... ] [ , TABLE( table_name( col_name [ , ... ] ) ) ] )

    The table or view columns on which to associate the data metric function. The data types of the columns must match the data types of the columns specified in the data metric function definition.

    If the data metric function accepts a second table as an argument, specify the fully qualified name of the table and its columns.

    [ , metric_name_2 ON ( col_name [ , ... ] [ , TABLE( table_name( col_name [ , ... ] ) ) ] ) ]

    Additional data metric functions to add to the table or view. Use a comma to separate each data metric function and its specified columns.

    If the data metric function accepts a second table as an argument, specify the fully qualified name of the table and its columns.

    MODIFY DATA METRIC FUNCTION metric_name

    Identifier of the data metric function to modify.

    ON ( col_name [ , ... ] [ , TABLE( table_name( col_name [ , ... ] ) ) ] ) { SUSPEND | RESUME }

    Suspends or resumes the data metric function on the specified columns. When a data metric function is set for a table or view, the data metric function is automatically included in the schedule. If the data metric function accepts a second table as an argument, specify the fully qualified name of the table and its columns.

    • SUSPEND removes the data metric function from the schedule.

    • RESUME brings a suspended data metric function back into the schedule.

    [ , metric_name_2 ON ( col_name [ , ... ] [ , TABLE( table_name( col_name [ , ... ] ) ) ] ) { SUSPEND | RESUME } ]

    Additional data metric functions to suspend or resume. Use a comma to separate each data metric function and its specified columns. If the data metric function accepts a second table as an argument, specify the fully qualified name of the table and its columns.

    For details about the access control requirements for these actions, see DMF privileges.
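    As a sketch, the add/modify/drop actions look like the following (the table orders and the column customer_id are assumptions; SNOWFLAKE.CORE.NULL_COUNT is a system data metric function):

    ```sql
    -- Associate a data metric function with a column.
    ALTER TABLE orders
      ADD DATA METRIC FUNCTION SNOWFLAKE.CORE.NULL_COUNT ON (customer_id);

    -- Temporarily remove it from the schedule, then drop the association.
    ALTER TABLE orders
      MODIFY DATA METRIC FUNCTION SNOWFLAKE.CORE.NULL_COUNT ON (customer_id) SUSPEND;
    ALTER TABLE orders
      DROP DATA METRIC FUNCTION SNOWFLAKE.CORE.NULL_COUNT ON (customer_id);
    ```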

    External table column actions (extTableColumnAction)

    For all other external table modifications, see ALTER EXTERNAL TABLE.

    ADD [ COLUMN ] [ IF NOT EXISTS ] col_name col_type AS ( expr ) [ , ... ]

    Adds a new column to the external table.

    If you are not sure if the column already exists, you can specify IF NOT EXISTS when adding the column. If the column already exists, ADD COLUMN has no effect on the existing column and does not result in an error.

    This operation can be performed on multiple columns in the same command.

    col_name

    String that specifies the column identifier (that is, name). All the requirements for table identifiers also apply to column identifiers.

    For more information, see Identifier requirements.

    col_type

    String (constant) that specifies the data type for the column. The data type must match the result of expr for the column.

    For details about the data types that can be specified for table columns, see SQL data types reference.

    expr

    String that specifies the expression for the column. When queried, the column returns results derived from this expression.

    External table columns are virtual columns, which are defined using an explicit expression. Add virtual columns as expressions using the VALUE column and/or the METADATA$FILENAME pseudocolumn:

    VALUE:

    A VARIANT type column that represents a single row in the external file.

    CSV:

    The VALUE column structures each row as an object with elements identified by column position (that is, {c1: <value1>, c2: <value2>, c3: <value3> ...}).

    For example, add a VARCHAR column named mycol that references the first column in the staged CSV files:

    mycol varchar as (value:c1::varchar)
    
    Semi-structured data:

    Enclose element names and values in double-quotes. Traverse the path in the VALUE column using dot notation.

    For example, suppose the following represents a single row of semi-structured data in a staged file:

    { "a":"1", "b": { "c":"2", "d":"3" } }
    

    Add a VARCHAR column named mycol that references the nested repeating c element in the staged file:

    mycol varchar as (value:"b"."c"::varchar)
    
    METADATA$FILENAME:

    A pseudocolumn that identifies the name of each staged data file included in the external table, including its path in the stage.

    RENAME COLUMN col_name TO new_col_name

    Renames the specified column to a new name that is not currently used for any other columns in the external table.

    DROP COLUMN [ IF EXISTS ] col_name

    Removes the specified column from the external table.

    If you are not sure if the column already exists, you can specify IF EXISTS when dropping the column. If the column does not exist, DROP COLUMN has no effect and does not result in an error.
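    As a sketch, assuming a hypothetical external table exttable_csv over staged CSV files:

    ```sql
    -- Add a virtual column derived from the third CSV column.
    ALTER TABLE exttable_csv ADD COLUMN order_total NUMBER(10,2)
      AS (value:c3::NUMBER(10,2));

    -- Add a column exposing each row's source file name.
    ALTER TABLE exttable_csv ADD COLUMN source_file VARCHAR
      AS (METADATA$FILENAME);
    ```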

    Constraint actions (constraintAction)

    ADD CONSTRAINT

    Adds an out-of-line integrity constraint to one or more columns in the table. To add an inline constraint (for a column), see Column Actions (in this topic).

    RENAME CONSTRAINT constraint_name TO new_constraint_name

    Renames the specified constraint.

    ALTER | MODIFY CONSTRAINT ...

    Alters the properties for the specified constraint.

    DROP CONSTRAINT constraint_name | PRIMARY KEY | UNIQUE | FOREIGN KEY ( col_name [ , ... ] ) [ CASCADE | RESTRICT ]

    Drops the specified constraint for the specified column or set of columns.

    For detailed syntax and examples for adding or altering constraints, see CREATE | ALTER TABLE … CONSTRAINT.

    Data Governance policy and tag actions (dataGovnPolicyTagAction)

    TAG tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ]

    Specifies the tag name and the tag string value.

    The tag value is always a string, and the maximum number of characters for the tag value is 256.

    For information about specifying tags in a statement, see Tag quota for objects.

    policy_name

    Identifier for the policy; must be unique for your schema.

    The following clauses apply to all object types that support row access policies, including but not limited to tables, views, and event tables. For simplicity, the clauses refer only to “table.”

    ADD ROW ACCESS POLICY policy_name ON (col_name [ , ... ])

    Adds a row access policy to the table.

    At least one column name must be specified. Additional columns can be specified with a comma separating each column name. This expression can also be used to add a row access policy to an event table or an external table.

    DROP ROW ACCESS POLICY policy_name

    Drops a row access policy from the table.

    Use this clause to drop the policy from the table.

    DROP ROW ACCESS POLICY policy_name, ADD ROW ACCESS POLICY policy_name ON ( col_name [ , ... ] )

    Drops the row access policy that is set on the table and adds a row access policy to the same table in a single SQL statement.

    DROP ALL ROW ACCESS POLICIES

    Drops all row access policy associations from the table.

    This expression is helpful when a row access policy is dropped from a schema before dropping the policy from an event table. Use this expression to drop row access policy associations from the table.
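    For example (the table orders, the column region, and the policies rap_orders and rap_orders_v2 are assumptions for illustration):

    ```sql
    -- Attach a row access policy to the table.
    ALTER TABLE orders ADD ROW ACCESS POLICY rap_orders ON (region);

    -- Replace it with a different policy in a single statement.
    ALTER TABLE orders
      DROP ROW ACCESS POLICY rap_orders,
      ADD ROW ACCESS POLICY rap_orders_v2 ON (region);
    ```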

    SET AGGREGATION POLICY policy_name
    [ ENTITY KEY (col_name [ , ... ]) ] [ FORCE ]

    Assigns an aggregation policy to the table.

    Use the optional ENTITY KEY parameter to define which columns uniquely identify an entity within the table. For more information, see Implementing entity-level privacy with aggregation policies.

    Use the optional FORCE parameter to atomically replace an existing aggregation policy with the new aggregation policy.

    UNSET AGGREGATION POLICY

    Detaches an aggregation policy from the table.

    SET JOIN POLICY policy_name
    [ FORCE ]

    Assigns a join policy to the table.

    Use the optional FORCE parameter to atomically replace an existing join policy with the new join policy.

    UNSET JOIN POLICY

    Detaches a join policy from the table.

    { ALTER | MODIFY } [ COLUMN ] ...
    USING ( col_name , cond_col_1 ... )

    Specifies the arguments to pass into the conditional masking policy SQL expression.

    The first column in the list specifies the column for the policy conditions to mask or tokenize the data and must match the column to which the masking policy is set.

    The additional columns specify the columns to evaluate to determine whether to mask or tokenize the data in each row of the query result when a query is made on the first column.

    If the USING clause is omitted, Snowflake treats the conditional masking policy as a normal masking policy.

    FORCE

    Replaces a masking or projection policy that is currently set on a column with a different policy in a single statement.

    Note that using the FORCE keyword with a masking policy requires the data type of the policy in the ALTER TABLE statement (i.e. STRING) to match the data type of the masking policy currently set on the column (i.e. STRING).

    If a masking policy is not currently set on the column, specifying this keyword has no effect.

    For details, see: Replace a masking policy on a column or Replace a projection policy.
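    As a sketch (the table orders, the column email, and the policies email_mask and email_mask_v2 are assumptions):

    ```sql
    -- Attach a masking policy to a column.
    ALTER TABLE orders MODIFY COLUMN email SET MASKING POLICY email_mask;

    -- Swap in a different policy of the same data type in one statement.
    ALTER TABLE orders MODIFY COLUMN email SET MASKING POLICY email_mask_v2 FORCE;
    ```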

    Search optimization actions (searchOptimizationAction)

    ADD SEARCH OPTIMIZATION

    Adds search optimization for the entire table or, if you specify the optional ON clause, for specific columns.

    Note

    • Search optimization can be expensive to maintain, especially if the data in the table changes frequently. For more information, see Search optimization cost estimation and management.

    • If you try to add search optimization on a materialized view, Snowflake returns an error message.

    ON search_method_with_target [, search_method_with_target ... ]

    Specifies that you want to configure search optimization for specific columns or VARIANT fields (instead of the entire table).

    For search_method_with_target, use an expression with the following syntax:

    search_method ( target [ , target , ... ] [ , ANALYZER => 'analyzer_name' ] )

    Where:

    • search_method specifies one of the following methods that optimizes queries for a particular type of predicate:

      • FULL_TEXT: Predicates that use VARCHAR (text), VARIANT, ARRAY, and OBJECT types.

      • EQUALITY: Equality and IN predicates.

      • SUBSTRING: Predicates that match substrings and regular expressions (for example, [ NOT ] LIKE, [ NOT ] ILIKE, [ NOT ] RLIKE, and REGEXP_LIKE).

      • GEO: Predicates that use GEOGRAPHY types.

  • target specifies the column, VARIANT field, or an asterisk (*).

    Depending on the value of search_method, you can specify a column or VARIANT field of one of the following types:

    • FULL_TEXT: Columns of VARCHAR (text), VARIANT, ARRAY, and OBJECT data types, including paths to fields in VARIANTs.

    • EQUALITY: Columns of numerical, string, binary, and VARIANT data types, including paths to fields in VARIANTs.

    • SUBSTRING: Columns of string or VARIANT data types, including paths to fields in VARIANTs. Specify paths to fields as described above under EQUALITY; searches on nested fields are improved in the same way.

    • GEO: Columns of the GEOGRAPHY data type.

    To specify a VARIANT field, use dot or bracket notation (for example, my_column:my_field_name.my_nested_field_name or my_column['my_field_name']['my_nested_field_name']). You can also use a colon-delimited path to the field (for example, my_column:my_field_name:my_nested_field_name).

    When you specify a VARIANT field, the configuration applies to all nested fields under that field. For example, if you specify ON EQUALITY(src:a.b):

    • This configuration can improve queries on src:a.b and on any nested fields (for example, src:a.b.c, src:a.b.c.d, etc.).

    • This configuration does not affect queries that do not use the src:a.b prefix (for example, src:a, src:z, etc.).
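    As a sketch, assuming a hypothetical table events with a VARIANT column src, the prefix behavior described above follows from a statement such as:

    ```sql
    -- Hypothetical table and column names. EQUALITY on src:a.b also covers
    -- nested fields such as src:a.b.c, but not siblings such as src:a or src:z.
    ALTER TABLE events ADD SEARCH OPTIMIZATION ON EQUALITY(src:a.b);
    ```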

    To specify all applicable columns in the table as targets, use an asterisk (*).

    Note that you cannot specify both an asterisk and specific column names for a given search method. However, you can specify an asterisk in different search methods.

    For example, you can specify the following expressions:

    -- Allowed
    ON SUBSTRING(*)
    ON EQUALITY(*), SUBSTRING(*), GEO(*)
    

    You cannot specify the following expressions:

    -- Not allowed
    ON EQUALITY(*, c1)
    ON EQUALITY(c1, *)
    ON EQUALITY(v1:path, *)
    ON EQUALITY(c1), EQUALITY(*)
    
  • ANALYZER => 'analyzer_name' specifies the name of the text analyzer, if search_method is FULL_TEXT.

    When the FULL_TEXT search method is used and queries are executed with the SEARCH or SEARCH_IP function, the analyzer breaks the search terms (and the text from the column being searched) into tokens. A row matches if any of the tokens extracted from the search string matches a token extracted from any of the columns or fields being searched. The analyzer isn’t relevant when the FULL_TEXT search method isn’t used or for queries that don’t use the SEARCH or SEARCH_IP function.
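    For instance, assuming a hypothetical table articles with FULL_TEXT search optimization on a VARCHAR column body, a query might invoke the SEARCH function like this:

    ```sql
    -- Hypothetical names. Returns rows where any token extracted from
    -- 'sky blue' matches a token extracted from the body column.
    SELECT id, body
    FROM articles
    WHERE SEARCH(body, 'sky blue');
    ```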

    The analyzer tokenizes a string by breaking it where it finds certain delimiters. These delimiters are not included in the resulting tokens, and empty tokens are not extracted.

    This parameter accepts one of the following values:

    • DEFAULT_ANALYZER: Breaks text into tokens based on the following delimiters:

      Character   Unicode code   Description
      (space)     U+0020         Space
      [           U+005B         Left square bracket
      ]           U+005D         Right square bracket
      ;           U+003B         Semicolon
      <           U+003C         Less-than sign
      >           U+003E         Greater-than sign
      (           U+0028         Left parenthesis
      )           U+0029         Right parenthesis
      {           U+007B         Left curly bracket
      }           U+007D         Right curly bracket
      |           U+007C         Vertical bar
      !           U+0021         Exclamation mark
      ,           U+002C         Comma
      '           U+0027         Apostrophe
      "           U+0022         Quotation mark
      *           U+002A         Asterisk
      &           U+0026         Ampersand
      ?           U+003F         Question mark
      +           U+002B         Plus sign
      /           U+002F         Slash
      :           U+003A         Colon
      =           U+003D         Equal sign
      @           U+0040         At sign
      .           U+002E         Period (full stop)
      -           U+002D         Hyphen
      $           U+0024         Dollar sign
      %           U+0025         Percent sign
      \           U+005C         Backslash
      _           U+005F         Underscore (low line)
      \n          U+000A         New line (line feed)
      \r          U+000D         Carriage return
      \t          U+0009         Horizontal tab

    • UNICODE_ANALYZER: Tokenizes based on Unicode segmentation rules that treat spaces and certain punctuation characters as delimiters. These internal rules are designed for natural language searches (in many different languages). For example, the default analyzer treats periods in IP addresses and apostrophes in contractions as delimiters, but the Unicode analyzer does not. See Using an analyzer to adjust search behavior.

      For more information about the Unicode Text Segmentation algorithm, see https://unicode.org/reports/tr29/.

    • NO_OP_ANALYZER: Tokenizes neither the data nor the query string. A search term must exactly match the full text in a column or field, including case sensitivity; otherwise, the SEARCH function returns FALSE. Even if the query string looks like it contains multiple tokens (for example, 'sky blue'), the column or field must equal the entire query string exactly. In this case, only 'sky blue' is a match; 'sky' and 'blue' are not matches.

    • ENTITY_ANALYZER: Tokenizes the data for IP address searches.

      This analyzer is used only for queries executed with the SEARCH_IP function.
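As a sketch (hypothetical table and column names), an analyzer is specified per FULL_TEXT target when adding search optimization:

```sql
-- Use the Unicode analyzer so that, for example, periods in IP-like
-- strings and apostrophes in contractions are not treated as delimiters.
ALTER TABLE logs ADD SEARCH OPTIMIZATION
  ON FULL_TEXT(msg, ANALYZER => 'UNICODE_ANALYZER');
```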

To specify more than one search method on a target, use a comma to separate each subsequent method and target:

ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c1), EQUALITY(c2, c3);

If you run the ALTER TABLE … ADD SEARCH OPTIMIZATION ON … command multiple times on the same table, each subsequent command adds to the existing configuration for the table. For example, suppose that you run the following commands:

ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c1, c2);
ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c3, c4);

This adds equality predicates for the columns c1, c2, c3, and c4 to the configuration for the table. This is equivalent to running the command:

ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c1, c2, c3, c4);

For examples, see Enabling search optimization for specific columns.

DROP SEARCH OPTIMIZATION

Removes search optimization for the entire table or, if you specify the optional ON clause, from specific columns.

Note

  • If a table has the search optimization property, then dropping the table and undropping it preserves the search optimization property.

  • Removing the search optimization property from a table and then adding it back incurs the same cost as adding it the first time.

ON search_method_with_target | column_name | expression_id [ , ... ]

Specifies that you want to drop the search optimization configuration for specific columns or VARIANT fields (instead of dropping search optimization for the entire table).

To identify the column configuration to drop, specify one of the following:

  • For search_method_with_target, specify a method for optimizing queries for one or more specific targets, which can be columns or VARIANT fields. Use the syntax described earlier.

  • For column_name, specify the name of the column configured for search optimization. Specifying the column name drops all expressions for that column, including expressions that use VARIANT fields in the column.

  • For expression_id, specify the ID for an expression listed in the output of the DESCRIBE SEARCH OPTIMIZATION command.

To specify more than one of these, use a comma between items.

You can specify any combination of search methods with targets, column names, and expression IDs.
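As an illustrative sketch (hypothetical table, columns, and expression ID), the three forms can be combined in a single statement:

```sql
-- Drops the SUBSTRING configuration on c1, every expression on c2, and
-- the expression whose ID (from DESCRIBE SEARCH OPTIMIZATION) is 4.
ALTER TABLE t1 DROP SEARCH OPTIMIZATION ON SUBSTRING(c1), c2, 4;
```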

For examples, see Dropping search optimization for specific columns.

Usage notes: General

  • Changes to a table are not automatically propagated to views created on that table. For example, if you drop a column in a table, and a view is defined to include that column, the view becomes invalid; the view is not adjusted to remove the column.

  • Dropping a column does not immediately free up the column’s storage space.

    • The space in each micro-partition is not reclaimed until that micro-partition is rewritten. Write operations (insert, update, delete, etc.) on one or more rows in a micro-partition cause that micro-partition to be rewritten. If you want to force space to be reclaimed, you can follow these steps:

      1. Use a CREATE TABLE AS SELECT (CTAS) statement to create a new table that contains only the columns of the old table you want to keep.

      2. Set the DATA_RETENTION_TIME_IN_DAYS parameter to 0 for the old table (optional).

      3. Drop the old table.

    • If the table is protected by the Time Travel feature, the space used by the Time Travel storage is not reclaimed until the Time Travel retention period expires.
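    The space-reclamation steps above can be sketched as follows (hypothetical table and column names):

    ```sql
    -- 1. Recreate the table with only the columns to keep.
    CREATE TABLE t1_new AS SELECT c1, c2 FROM t1;

    -- 2. Optionally disable Time Travel retention on the old table.
    ALTER TABLE t1 SET DATA_RETENTION_TIME_IN_DAYS = 0;

    -- 3. Drop the old table.
    DROP TABLE t1;
    ```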

  • If a new column with a default value is added to a table with existing rows, all of the existing rows are populated with the default value.

  • Adding a new column with a default value containing a function is not currently supported. The following error is returned:

    Invalid column default expression (expr)

  • To alter a table, you must be using a role that has ownership privilege on the table.

  • To add clustering to a table, you must also have USAGE or OWNERSHIP privileges on the schema and database that contain the table.

  • For masking policies:

    • The USING clause and the FORCE keyword are both optional; neither is required to set a masking policy on a column. The USING clause and the FORCE keyword can be used separately or together. For details, see:

    • A single masking policy that uses conditional columns can be applied to multiple tables provided that the column structure of the table matches the columns specified in the policy.

    • When modifying one or more table columns with a masking policy, or the table itself with a row access policy, use the POLICY_CONTEXT function to simulate a query on the protected column(s) or table.