Hey everyone. In this post I will walk you through what elastic tables are, when to use them, how to create and work with them in code, and the critical gotchas you need to understand before you build anything on top of them.
Let me tell you our scenario first. A customer asks you to store IoT sensor readings in Dataverse. They have thousands of devices, each emitting data every few seconds. Or maybe they want to keep a full audit event stream, or store semi-structured JSON payloads where the schema changes depending on the event type. You start thinking about standard Dataverse tables and you quickly realize the problem. Standard tables are built on SQL Server. They are relational, transactional, and strongly consistent. Those are great properties for CRM data, but they are not designed to handle millions of rows, bursty write loads, and constantly shifting data shapes.
That is where elastic tables come in.
What Are Elastic Tables?
Elastic tables in Dataverse are backed by Azure Cosmos DB under the hood, rather than SQL Server like standard tables. That changes everything about how they behave.
Because they use Cosmos DB, they automatically scale horizontally to handle large volumes of data and high levels of read and write throughput. You do not need to provision capacity or tune indexes for massive workloads. The platform handles that.
Elastic tables also support semi-structured data naturally. You can store JSON payloads in string columns with JSON format, which means your data model does not need to be fully defined upfront. This is useful for scenarios where each record contains a different set of attributes depending on its type.
Two system columns that every elastic table has are worth knowing about from the start:
- `partitionid`: a string column used for horizontal partitioning. This is the most important concept in elastic tables, and I will cover it in detail shortly.
- `ttlinseconds`: a time-to-live integer column. Set a value in seconds, and the row automatically expires and is deleted by Dataverse after that duration. Leave it null and the record persists indefinitely, just like a standard table row.
When to Use Elastic Tables vs. Standard Tables
The decision comes down to the nature of your data and what your application needs from the platform.
Use elastic tables when:
- Your data is unstructured or semi-structured, or your data model changes frequently
- You need to handle a high volume of reads and writes
- You have bursty workloads that spike unpredictably
- Your data has a natural expiry and you want automatic cleanup via TTL
Use standard tables when:
- Your application requires strong data consistency across operations
- You need transactional behavior across multiple tables or during plugin execution
- Your application depends on complex joins or link-entity queries
- You need plugin rollback to work as expected
That last point about rollback is one I want to emphasize. It catches people off guard, and I will cover it in its own section.
A combination of elastic and standard tables is perfectly valid. For example, you might store core CRM records in standard tables and use an elastic table to hold the event log or telemetry data that references those records.
How to Create an Elastic Table
You can create an elastic table without writing code at all, through the Power Apps maker portal, the same way you create any custom table. The only difference is that you select Elastic as the table type when creating it.
If you need to create one in code, the key is setting TableType = "Elastic" on the EntityMetadata object. If you omit this property, Dataverse defaults to Standard and creates a regular table.
Here is the SDK example for creating an elastic table called contoso_SensorData:
```csharp
public static CreateEntityResponse CreateElasticTable(IOrganizationService service)
{
    var request = new CreateEntityRequest
    {
        Entity = new EntityMetadata
        {
            SchemaName = "contoso_SensorData",
            DisplayName = new Label("SensorData", 1033),
            DisplayCollectionName = new Label("SensorData", 1033),
            Description = new Label("Stores IoT data emitted from devices", 1033),
            OwnershipType = OwnershipTypes.UserOwned,
            TableType = "Elastic", // This is what makes it an elastic table
            IsActivity = false,
            CanCreateCharts = new Microsoft.Xrm.Sdk.BooleanManagedProperty(false)
        },
        PrimaryAttribute = new StringAttributeMetadata
        {
            SchemaName = "contoso_SensorType",
            RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None),
            MaxLength = 100,
            FormatName = StringFormatName.Text,
            DisplayName = new Label("Sensor Type", 1033),
            Description = new Label("Type of sensor emitting data", 1033)
        }
    };

    return (CreateEntityResponse)service.Execute(request);
}
```
Column Limitations
Before you design your schema, know these restrictions. The following column types are not supported in elastic tables:
- Money (`MoneyAttributeMetadata`)
- Multi-select picklist
- State and Status columns
- Image columns
- Calculated, rollup, and formula columns
What you can do that standard tables cannot is store JSON data in a string column with JSON format. This is the right approach when you have complex, variable payload data that does not fit a fixed schema:
```csharp
public static Guid CreateJsonAttribute(IOrganizationService service)
{
    var request = new CreateAttributeRequest
    {
        EntityName = "contoso_sensordata",
        Attribute = new StringAttributeMetadata
        {
            SchemaName = "contoso_EnergyConsumption",
            RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None),
            MaxLength = 1000,
            FormatName = StringFormatName.Json, // JSON-formatted string column
            DisplayName = new Label("Energy Consumption", 1033),
            Description = new Label("Contains information about energy consumed by the IoT devices", 1033)
        },
        SolutionUniqueName = "examplesolution"
    };

    var response = (CreateAttributeResponse)service.Execute(request);
    return response.AttributeId;
}
```
For very large JSON payloads, use MemoAttributeMetadata with JSON format instead of StringAttributeMetadata.
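To make that concrete, here is a hedged sketch of what a memo-based JSON column definition might look like. It mirrors the string attribute example above; the column schema name and max length are illustrative, and I am assuming the `MemoFormatName.Json` format name available in recent SDK versions, so check your SDK before copying this:

```csharp
public static Guid CreateLargeJsonAttribute(IOrganizationService service)
{
    var request = new CreateAttributeRequest
    {
        EntityName = "contoso_sensordata",
        Attribute = new MemoAttributeMetadata
        {
            SchemaName = "contoso_SensorPayload", // hypothetical column name
            RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None),
            MaxLength = 1048576, // memo columns allow much larger values than string columns
            FormatName = MemoFormatName.Json, // assumption: available in your SDK version
            DisplayName = new Label("Sensor Payload", 1033),
            Description = new Label("Large raw JSON payload from the device", 1033)
        },
        SolutionUniqueName = "examplesolution"
    };

    var response = (CreateAttributeResponse)service.Execute(request);
    return response.AttributeId;
}
```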
Understanding the partitionid Column
If there is one thing to understand deeply about elastic tables, it is partitionid. Getting this right determines how well your table scales and performs.
Cosmos DB splits data into logical partitions based on the partitionid value. All rows that share the same partitionid value belong to the same logical partition. Each partition can hold up to 20 GB of data.
For queries, performance is best when you scope them to a specific partition. A query that crosses all partitions is essentially a fan-out scan, which is much slower.
How to Choose a Good Partition Key
Your partitionid value should meet these criteria:
- High cardinality. It should have a wide range of possible values so data distributes evenly across many partitions.
- Query-aligned. Most of your queries should naturally filter by this value. If you almost always query in the context of a customer ID, device ID, or tenant ID, that is a good partition key.
- Immutable. Once you create a row with a `partitionid` value, you cannot change it. Ever. This is not a constraint you can work around later.
- Even distribution. Avoid values that cluster data heavily in a small number of partitions, which causes hot partitions.
- Valid format. The value must be under 1,024 bytes and cannot contain slashes, angle brackets, asterisks, percent signs, ampersands, colons, backslashes, question marks, or plus signs.
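Since `partitionid` cannot be changed after creation, it is worth validating candidate values before any rows exist. Here is a minimal helper sketch, written against the size and character constraints listed above; the class name is my own, not part of the SDK:

```csharp
// Hypothetical helper: checks a candidate partitionid value against the
// documented constraints before any rows are created with it.
public static class PartitionKeyValidator
{
    private static readonly char[] ForbiddenChars =
        { '/', '<', '>', '*', '%', '&', ':', '\\', '?', '+' };

    public static bool IsValid(string candidate)
    {
        if (string.IsNullOrEmpty(candidate))
            return false;

        // Must stay under 1,024 bytes when UTF-8 encoded
        if (System.Text.Encoding.UTF8.GetByteCount(candidate) >= 1024)
            return false;

        // None of the disallowed characters may appear
        return candidate.IndexOfAny(ForbiddenChars) < 0;
    }
}
```

So `PartitionKeyValidator.IsValid("deviceid-001")` returns `true`, while `PartitionKeyValidator.IsValid("sensors/device-01")` returns `false` because of the slash.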
If you do not set a partitionid value when creating a row, it remains null permanently and you cannot change it later. In that case, Dataverse uses the primary key as the effective partition value, which is fine for tables where you mostly retrieve records by primary key.
The Built-In Alternate Key
Every elastic table is automatically created with one alternate key named KeyForNoSqlEntityWithPKPartitionId. This key combines the table’s primary key with the partitionid column. You will use this alternate key constantly in your code whenever you need to reference a specific record that has a non-null partitionid value.
You cannot create additional custom alternate keys on elastic tables.
CRUD Operations in Code
Let me walk through each operation with code examples. I will use the contoso_SensorData table from the Microsoft documentation scenario, where a device ID is used as the partition key.
Create
When creating a record, set partitionid directly in the entity attributes. Also set ttlinseconds if you want the record to auto-expire. The response contains the session token you will need if you want to immediately read what you just wrote.
```csharp
public static Guid CreateSensorData(
    IOrganizationService service,
    string deviceId,
    ref string sessionToken)
{
    var entity = new Entity("contoso_sensordata")
    {
        Attributes =
        {
            { "contoso_deviceid", deviceId },
            { "contoso_sensortype", "Humidity" },
            { "contoso_value", 40 },
            { "contoso_timestamp", DateTime.UtcNow },
            { "partitionid", deviceId },  // Set partition key at creation
            { "ttlinseconds", 86400 }     // Auto-delete after 24 hours
        }
    };

    var request = new CreateRequest { Target = entity };
    var response = (CreateResponse)service.Execute(request);

    // Capture session token for use in subsequent reads
    sessionToken = response.Results["x-ms-session-token"].ToString();
    return response.id;
}
```
One important note: deep insert is not supported with elastic tables. If you need to create related records, create each one independently with its own Create call.
Retrieve
When retrieving a record that has a non-null partitionid, you must include the partitionid value to uniquely identify it. Use the KeyForNoSqlEntityWithPKPartitionId alternate key through a KeyAttributeCollection:
```csharp
public static void RetrieveSensorData(
    IOrganizationService service,
    Guid sensorDataId,
    string deviceId,
    string sessionToken)
{
    var keys = new KeyAttributeCollection
    {
        { "contoso_sensordataid", sensorDataId },
        { "partitionid", deviceId }
    };

    var request = new RetrieveRequest
    {
        Target = new EntityReference("contoso_sensordata", keys),
        ColumnSet = new ColumnSet("contoso_value"),
        ["SessionToken"] = sessionToken // Pass session token for read-your-writes consistency
    };

    var response = (RetrieveResponse)service.Execute(request);
    Console.WriteLine($"Value: {response.Entity.GetAttributeValue<int>("contoso_value")}");
}
```
Update
Updates also use the KeyAttributeCollection to identify the record. Include the session token and capture the new one from the response:
```csharp
public static void UpdateSensorData(
    IOrganizationService service,
    Guid sensorDataId,
    string deviceId,
    ref string sessionToken)
{
    var keys = new KeyAttributeCollection
    {
        { "contoso_sensordataid", sensorDataId },
        { "partitionid", deviceId }
    };

    var entity = new Entity("contoso_sensordata", keys)
    {
        Attributes =
        {
            { "contoso_value", 60 },
            { "contoso_timestamp", DateTime.UtcNow }
        }
    };

    var request = new UpdateRequest
    {
        Target = entity,
        ["SessionToken"] = sessionToken
    };

    var response = (UpdateResponse)service.Execute(request);
    sessionToken = response.Results["x-ms-session-token"].ToString();
}
```
Delete
For delete, pass the partitionId optional parameter directly:
```csharp
public static void DeleteSensorData(
    IOrganizationService service,
    Guid sensorDataId,
    string deviceId)
{
    var request = new DeleteRequest
    {
        Target = new EntityReference("contoso_sensordata", sensorDataId),
        ["partitionId"] = deviceId
    };

    service.Execute(request); // Note: delete does not currently return x-ms-session-token
}
```
Query
When querying, use the partitionId optional parameter on RetrieveMultipleRequest to scope the query to a specific partition. This is significantly faster than a full table scan across all partitions:
```csharp
public static EntityCollection QueryByPartition(
    IOrganizationService service,
    string deviceId)
{
    // Scoping to a single partition gives the best query performance
    var request = new RetrieveMultipleRequest
    {
        Query = new QueryExpression("contoso_sensordata")
        {
            ColumnSet = new ColumnSet("contoso_value", "contoso_timestamp")
        },
        ["partitionId"] = deviceId // Note: capital I in partitionId here
    };

    var response = (RetrieveMultipleResponse)service.Execute(request);
    return response.EntityCollection;
}
```
Note the casing: when used as an optional parameter on RetrieveMultipleRequest, the parameter name is partitionId with a capital I, not partitionid in all lowercase.
Also worth knowing: link-entity queries are not supported on elastic tables. If you try to include a link entity in your query, you get error code 0x80048d0b with the message “Link entities are not supported”. You can retrieve related rows on a single record retrieve, but not in a multi-record query.
Upsert
Elastic table upsert behavior is different from standard tables. An upsert on an elastic table replaces the entire record, not just the fields you provide. Make sure your upsert payload includes all the fields you want the final record to have:
```csharp
public static bool UpsertSensorData(
    IOrganizationService service,
    Guid id,
    ref string sessionToken)
{
    // Elastic upsert replaces the full record, not a partial update
    var entity = new Entity("contoso_sensordata", id)
    {
        Attributes =
        {
            { "contoso_deviceid", "deviceid-001" },
            { "contoso_sensortype", "Humidity" },
            { "contoso_value", 60 },
            { "contoso_timestamp", DateTime.UtcNow },
            { "partitionid", "deviceid-001" },
            { "ttlinseconds", 86400 }
        }
    };

    var request = new UpsertRequest
    {
        Target = entity,
        ["SessionToken"] = sessionToken
    };

    var response = (UpsertResponse)service.Execute(request);
    sessionToken = response.Results["x-ms-session-token"].ToString();
    return response.RecordCreated;
}
```
Session Tokens and Consistency
Elastic tables use eventual consistency across different logical sessions. This means that if you write a record and immediately try to read it from a different context, the read might return stale data. For many scenarios this is acceptable. For scenarios where you need to read your own writes immediately, you need session tokens.
Every write operation returns an x-ms-session-token value in the response. Capture it and pass it on subsequent reads using the SessionToken optional parameter:
```csharp
// After a write, capture the token
string sessionToken = response.Results["x-ms-session-token"].ToString();

// Pass it on the next read to guarantee read-your-writes consistency
var request = new RetrieveRequest
{
    Target = new EntityReference("contoso_sensordata", keys),
    ColumnSet = new ColumnSet("contoso_value"),
    ["SessionToken"] = sessionToken
};
```
If you do not pass the session token, the read might not reflect your most recent write. For user-facing scenarios where someone creates or updates a record and is immediately shown the updated view, always propagate the session token.
Note that delete operations do not currently return an x-ms-session-token value. This is a known issue.
Bulk Operations
For high-volume data ingestion, use CreateMultiple, UpdateMultiple, and DeleteMultiple. These are optimized for elastic tables and give you much higher throughput than individual operations in a loop:
```csharp
public static Guid[] BulkCreateSensorData(IOrganizationService service)
{
    string tableLogicalName = "contoso_sensordata";

    List<Entity> entityList =
    [
        new(tableLogicalName)
        {
            Attributes =
            {
                { "contoso_deviceid", "deviceid-001" },
                { "contoso_sensortype", "Humidity" },
                { "contoso_value", 40 },
                { "contoso_timestamp", new DateTime(2026, 4, 17, 5, 0, 0, DateTimeKind.Utc) },
                { "partitionid", "deviceid-001" },
                { "ttlinseconds", 86400 }
            }
        },
        new(tableLogicalName)
        {
            Attributes =
            {
                { "contoso_deviceid", "deviceid-002" },
                { "contoso_sensortype", "Pressure" },
                { "contoso_value", 20 },
                { "contoso_timestamp", new DateTime(2026, 4, 17, 7, 20, 0, DateTimeKind.Utc) },
                { "partitionid", "deviceid-002" },
                { "ttlinseconds", 86400 }
            }
        }
    ];

    EntityCollection entities = new(entityList) { EntityName = tableLogicalName };

    var request = new CreateMultipleRequest { Targets = entities };
    var response = (CreateMultipleResponse)service.Execute(request);
    return response.Ids;
}
```
The same pattern applies to UpdateMultipleRequest and DeleteMultipleRequest. When you need to delete multiple records, use KeyAttributeCollection on each EntityReference to include the partitionid:
```csharp
List<EntityReference> entityReferences = new()
{
    new EntityReference(
        logicalName: "contoso_sensordata",
        keyAttributeCollection: new KeyAttributeCollection
        {
            { "contoso_sensordataid", new Guid("3f56361a-b210-4a74-8708-3c664038fa41") },
            { "partitionid", "deviceid-001" }
        }),
    new EntityReference(
        logicalName: "contoso_sensordata",
        keyAttributeCollection: new KeyAttributeCollection
        {
            { "contoso_sensordataid", new Guid("e682715b-1bba-415e-b2bc-de9327308423") },
            { "partitionid", "deviceid-002" }
        })
};

var request = new DeleteMultipleRequest
{
    Targets = new EntityReferenceCollection(entityReferences)
};
service.Execute(request);
```
Plugin Behavior: The Critical Difference
This is the section I see cause the most confusion when developers first work with elastic tables. Elastic tables do not support multi-record transactions, and plugin rollback behavior is fundamentally different from standard tables.
With standard tables, if a plugin throws InvalidPluginExecutionException in a synchronous PreOperation or PostOperation stage, the entire operation rolls back. The record is not saved.
With elastic tables, this is not guaranteed in PreOperation or PostOperation. If the Main operation already succeeded and your plugin throws in PostOperation, the data operation has already completed. An error is returned to the caller, but the record is saved.
The only stage where you can reliably block an elastic table operation is PreValidation. Validation in PreValidation works as expected: the error is returned and the data operation does not begin.
In practice, this means:
- Always put your validation logic in a `PreValidation` plugin step, not `PreOperation` or `PostOperation`
- Never assume that throwing in `PostOperation` will roll back an elastic table write
- `ExecuteTransactionRequest` and Web API `$batch` changeset operations do not apply transactions to elastic tables, even though they currently do not throw an error. This is a known issue that will produce errors in future platform updates
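To make the first point concrete, here is a minimal sketch of what a validation plugin might look like, intended to be registered on the PreValidation stage of the Create message. The class name and the specific validation rule are illustrative, not from the documentation:

```csharp
public class ValidateSensorDataPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target &&
            target.LogicalName == "contoso_sensordata")
        {
            // Throwing here, in PreValidation, reliably blocks the write.
            // The same throw in PreOperation or PostOperation would not
            // roll back an elastic table operation.
            if (!target.Attributes.Contains("partitionid"))
            {
                throw new InvalidPluginExecutionException(
                    "Sensor data records must include a partitionid value.");
            }
        }
    }
}
```

Register the step on PreValidation when you set it up in the Plugin Registration Tool; the code itself is identical to any other plugin, the stage is what matters.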
Relationship Limitations
Elastic tables support one-to-many relationships, but with restrictions:
- Many-to-many relationships are not supported
- Cascading behavior is not supported. When you create a relationship, the cascade behavior must be set to `Cascade.None`
- You cannot return related rows in a multi-record query. Related rows are only available when you retrieve a single record
When you create a many-to-one relationship where the elastic table is the referenced table, Dataverse adds an extra string column to the referencing table with the naming convention <lookup name>pid. This column stores the partitionid value of the related elastic table record. When you set the lookup using an alternate key reference, this column is populated automatically.
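Here is a hedged sketch of what setting such a lookup might look like from the referencing side, using an alternate key `EntityReference` so the platform has the `partitionid` it needs. The referencing table `contoso_alert` and its columns are hypothetical names for illustration:

```csharp
// Reference the elastic record by its alternate key (primary key + partitionid)
// so Dataverse can populate the <lookup name>pid column on the referencing row.
var sensorRef = new EntityReference(
    "contoso_sensordata",
    new KeyAttributeCollection
    {
        { "contoso_sensordataid", sensorDataId },
        { "partitionid", deviceId }
    });

// Hypothetical standard table with a lookup to the elastic table
var alert = new Entity("contoso_alert")
{
    Attributes =
    {
        { "contoso_name", "Humidity threshold exceeded" },
        { "contoso_sensor", sensorRef } // hypothetical lookup column
    }
};

service.Create(alert);
```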
Best Practices and What to Avoid
Based on what I have seen working with elastic tables, here are the practices that matter most.
Do these:
- Decide on your partition key before you create any rows. You cannot change `partitionid` after a row is created. This is a schema-level decision, not something you can fix later.
- Use a partition key that maps to how you query. If every query starts with "for device X" or "for customer Y", that is your partition key.
- Set `ttlinseconds` for any data that has a natural expiry. Let the platform clean it up automatically rather than managing deletions yourself.
- Use `CreateMultiple` and `UpdateMultiple` for bulk ingestion scenarios. The throughput difference versus individual calls is significant.
- Scope queries to a partition using the `partitionId` optional parameter. Cross-partition queries are slower and should be the exception, not the rule.
- Put all validation in `PreValidation` plugin steps.
Avoid these:
- Do not use elastic tables for data that requires complex joins or relational integrity across tables. Standard tables exist for that reason.
- Do not assume plugin `PostOperation` failures roll back the operation. They do not.
- Do not use generic or low-cardinality values as partition keys. Using a column like `type` or `status` produces just a handful of partitions regardless of how many rows you have, which defeats the purpose of horizontal partitioning entirely.
- Do not use generic short strings as partition keys either. A value like `account` shared across many kinds of rows concentrates unrelated data in one partition, and partition-scoped queries will match far more than you intend.
- Do not use `ExecuteTransactionRequest` expecting atomicity. It currently succeeds but is not truly atomic, and will throw errors in a future update.
- Do not forget about the `<lookup>pid` column on referencing tables. If you are querying a standard table and need to retrieve the related elastic table record, you need that column to get the `partitionid`.
Wrapping Up
Elastic tables are a genuinely useful addition to the Dataverse platform, but they require a different mental model than standard tables. The Cosmos DB foundation gives you horizontal scale and TTL that standard tables cannot offer, but it removes the transactional guarantees and relational query capabilities that you rely on for core CRM data.
The key things to take away are: decide your partition key before any rows are created, always set it consistently, scope your queries to a partition, keep validation logic in PreValidation plugin steps, and do not expect plugin rollback behavior to work the same as it does on standard tables.