The SnowPro Core exam expects you to know about four areas of Snowflake's storage concepts:
- Micro-partitions
- Column metadata clustering
- Data storage monitoring
- Search optimization service
This post will take a look at how to monitor data storage. The main point of this is to manage your spending, so you're not needlessly paying out for data storage.
Continuous Data Protection (CDP)
CDP is automatically included with your Snowflake account. It's built into every type of Snowflake account, from Standard to VPS. In reality, CDP is an umbrella term that groups a number of key data protection features together:
- Network policies
- Multi-factor authentication and federated authentication
- Access control
- End-to-end encryption
- Time Travel
- Fail-safe
Even though CDP is built into all Snowflake editions, not all CDP features are available to all editions, and some come in a different form. For instance, Time Travel supports a maximum of 1 day in Standard Edition, but up to 90 days in Enterprise Edition and above. This is great if somebody (not you, obviously!) accidentally deletes something or you need to check something from the past. More on Time Travel in the future.
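As a quick taste of Time Travel, here's a sketch of querying and restoring historical data - the table name and timestamp are illustrative, but the AT and UNDROP syntax is standard Snowflake:

```sql
-- Query the table as it looked one hour ago (offset is in seconds)
SELECT * FROM orders AT(OFFSET => -3600);

-- Query the table as of a specific point in time
SELECT * FROM orders AT(TIMESTAMP => '2024-01-15 09:00:00'::TIMESTAMP_LTZ);

-- Restore an accidentally dropped table (within the retention period)
UNDROP TABLE orders;
```

Both queries only work within the table's Time Travel retention period - 1 day by default.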
Managing Storage
You need to be an ACCOUNTADMIN to manage storage for your account. ACCOUNTADMIN is roughly the equivalent of the sysadmin role in SQL Server - you can do everything. To view the storage used and associated costs, log in to Snowsight and navigate to Admin > Cost Management > Consumption. Once you're there, choose Storage as the Usage Type to view your storage details.
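If you prefer SQL to the Snowsight UI, the same information is available from the ACCOUNT_USAGE schema in the shared SNOWFLAKE database (which ACCOUNTADMIN can query by default). A minimal sketch:

```sql
-- Daily account-level storage, split into table, stage and Fail-safe bytes
SELECT usage_date,
       storage_bytes  / POWER(1024, 3) AS table_storage_gb,
       stage_bytes    / POWER(1024, 3) AS stage_storage_gb,
       failsafe_bytes / POWER(1024, 3) AS failsafe_storage_gb
FROM snowflake.account_usage.storage_usage
ORDER BY usage_date DESC
LIMIT 30;
```

Bear in mind that ACCOUNT_USAGE views have some latency (up to a couple of hours), so the most recent day may lag behind what Snowsight shows.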

Type of Data Storage
There are different data storage mechanisms in Snowflake.
- Table storage
- Table storage
Applies to individual tables. Any user with the relevant privileges can view data storage on a per-table basis, using the SHOW TABLES command or the TABLE_STORAGE_METRICS view.
- Staged file storage
Stages are used for loading data from files. It stands to reason that these files are probably transient in Snowflake terms - if you need to keep them after you've finished loading, consider storing them locally and removing them from the stage to save on storage costs.
- Cloning
Snowflake allows you to clone databases, schemas and tables. It's a nifty feature, and cheaper than you might expect: a clone is zero-copy, sharing the source table's micro-partitions, so cloning a 20 million row table doesn't double your storage costs. You only pay for additional storage as the clone and its source diverge through changes.
- Temporary tables
A temporary table still needs storage, but it only exists for the duration of the user session, and it supports at most one day of Time Travel retention and no Fail-safe - so the storage costs are limited accordingly.
- Transient tables
These are a special Snowflake feature, and we'll cover them in due course (when I'll update this particular blog entry!). They sit between permanent and temporary tables: they persist until explicitly dropped and are visible to everybody with access rights, but like temporary tables they support at most one day of Time Travel retention and have no Fail-safe period.
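To see these costs on a per-table basis, here's a sketch using the two options mentioned above (the database name is illustrative):

```sql
-- Quick look: SHOW TABLES includes a BYTES column per table
SHOW TABLES IN DATABASE sales_db;

-- More detail: active, Time Travel and Fail-safe bytes per table
SELECT table_name,
       active_bytes,
       time_travel_bytes,
       failsafe_bytes,
       retained_for_clone_bytes
FROM sales_db.information_schema.table_storage_metrics
ORDER BY active_bytes DESC;
```

The RETAINED_FOR_CLONE_BYTES column is worth a look - it shows storage kept alive only because a clone still references it.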
High-Churn Tables
Some dimension tables may be deemed high-churn - the rows are modified a lot. If you have a large table with constantly changing rows, you may want to assess how you manage it. Snowflake recommends that tables with excessive CDP costs be created as transient tables with no Time Travel retention (you just set the Time Travel period to zero to turn it off). Since that removes the built-in recovery options, you instead take periodic full copies of the table as backups, dropping older backups once they're no longer needed. Check out the Managing costs for large, high-churn tables section on the Snowflake Data Storage Considerations page for more info.
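A sketch of that recommendation - table names are illustrative, and you'd schedule the backup step to suit your own recovery needs:

```sql
-- High-churn table: transient, with Time Travel switched off,
-- so no CDP (Time Travel or Fail-safe) storage costs accrue
CREATE TRANSIENT TABLE customer_dim (
    customer_id INT,
    attributes  VARIANT
) DATA_RETENTION_TIME_IN_DAYS = 0;

-- Periodic backup: a full copy you control
CREATE TABLE customer_dim_backup_20240115 AS
SELECT * FROM customer_dim;

-- Drop older backups once they're no longer needed
DROP TABLE IF EXISTS customer_dim_backup_20240101;
```

The trade-off is explicit: you give up automatic recovery in exchange for paying only for the backups you choose to keep.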
Summary
Like most things on the SnowPro Core exam, a simple little bullet point actually involves a raft of technologies! Check out some of the data storage features we've touched on, and especially have a play with Time Travel - it's quite a magical feature once you've built up a bit of data to experiment with.