Updating statistics on large tables can take a while. A page-compressed warehouse fact table with 19 billion rows, 1000 partitions, a sensible clustered index, and 35 columns occupying in excess of 1TB of disk space on my server takes around 22 hours to update its 33 stats objects using the default sample size. Sure, you can do several tables in parallel, but you can’t update the various stats on a single table in parallel using supported functionality.
So the next step is usually to tinker with the sample size, spread the update across multiple maintenance windows, switch older partitions out to an archive table, or perhaps leverage filtered stats (conditions apply) or incremental stats (new in SQL Server 2014). This blog post looks at a more radical approach – attempting to update those 33 stats “in parallel” to get that 22-hour binge down to the length of a reasonable coffee break.
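For reference, the conventional mitigations mentioned above look something like the sketch below. The table, column, and statistics names are illustrative, not from my actual warehouse:

```sql
-- Reduce the sample size instead of taking the default
UPDATE STATISTICS dbo.FactSales WITH SAMPLE 1 PERCENT;

-- Filtered stats: only cover the hot slice of the data (conditions apply)
CREATE STATISTICS st_FactSales_Recent
ON dbo.FactSales (SaleDate)
WHERE SaleDate >= '2014-01-01';

-- Incremental stats (SQL Server 2014+, partitioned tables only):
-- per-partition stats that can be refreshed one partition at a time
CREATE STATISTICS st_FactSales_Amount
ON dbo.FactSales (Amount)
WITH INCREMENTAL = ON;

UPDATE STATISTICS dbo.FactSales (st_FactSales_Amount)
WITH RESAMPLE ON PARTITIONS (998, 999, 1000);
```

Each of these trades something away – sampling accuracy, coverage, or version requirements – which is why the rest of this post chases parallelism instead.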