Schema migrations in Sanity are inevitable. As your content model evolves, you need reliable ways to move existing data to match the new shape. Sanity provides defineMigration from sanity/migrate — a built-in migration framework with dry-run support, document type filtering, and a CLI runner. You write migration files that are explicit, testable, and repeatable.
This guide covers the most common patterns you’ll encounter.
Renaming a Field
The simplest migration: a field name changes, but the data type stays the same. Maybe you started with title and need it to be heading, or body needs to become content.
Don’t just move the data — follow the deprecation lifecycle to avoid breaking editors or your frontend during the transition.
Phase 1: Deprecate the Old Field
Before running any migration, update your schema to mark the old field as deprecated. This prevents editors from adding new data to a field you’re about to remove.
// In your schema definition
defineField({
name: "title",
title: "Title (Deprecated)",
type: "string",
deprecated: {
reason: 'Use the new "heading" field instead. Will be removed after migration.',
},
readOnly: true,
hidden: ({ value }) => value === undefined,
})
Deploy this schema change and update your frontend queries to use coalesce() so both old and new field values work during the transition:
*[_type == "article"]{
"heading": coalesce(heading, title),
}
Phase 2: Migrate the Data
Create a migration file using defineMigration:
// migrations/rename-title-to-heading/index.ts
import { defineMigration, at, setIfMissing, unset } from "sanity/migrate";
export default defineMigration({
title: "Rename title to heading",
documentTypes: ["article"],
filter: "defined(title) && !defined(heading)",
migrate: {
document(doc) {
return [
at("heading", setIfMissing(doc.title)),
at("title", unset()),
];
},
},
});
Run it with the CLI:
# Dry run first (default behavior)
sanity migration run rename-title-to-heading
# Execute when ready
sanity migration run rename-title-to-heading --no-dry-run
Phase 3: Clean Up
Once you’ve verified that title is undefined across all documents, remove the deprecated field from your schema and drop the coalesce() fallback from your frontend queries.
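One way to verify is a count query in Vision. This query (using the field names from the example above) returns the number of documents still carrying the old field; it should come back as 0 before you delete anything:

```groq
// Should be 0 before removing the deprecated field from the schema
count(*[_type == "article" && defined(title)])
```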
Tip
A well-chosen filter makes a migration idempotent — running it twice won’t double-apply changes. Here, defined(title) && !defined(heading) ensures only unmigrated documents are touched.
Reshaping a Document Type
Sometimes a flat document structure needs to become nested. For example, moving streetAddress, city, and zipCode into an address object.
// migrations/reshape-office-address/index.ts
import { defineMigration, at, set, unset } from "sanity/migrate";
export default defineMigration({
title: "Reshape office address fields into nested object",
documentTypes: ["office"],
filter: "defined(streetAddress) && !defined(address)",
migrate: {
document(doc) {
return [
at("address", set({
_type: "address",
street: doc.streetAddress,
city: doc.city,
zipCode: doc.zipCode,
})),
at("streetAddress", unset()),
at("city", unset()),
at("zipCode", unset()),
];
},
},
});
The key detail: the nested object needs a _type field matching your schema definition. Forgetting this is the most common mistake in reshape migrations.
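A cheap way to guard against that mistake is a post-migration spot check over fetched documents. This is a standalone sketch (not part of sanity/migrate) that simply asks whether a migrated value carries the expected _type:

```typescript
// Minimal post-migration check: does a migrated value carry the expected _type?
function hasExpectedType(value: unknown, expected: string): boolean {
  return (
    typeof value === "object" &&
    value !== null &&
    (value as { _type?: string })._type === expected
  );
}

// A correctly reshaped address passes; one missing _type does not
console.log(hasExpectedType({ _type: "address", city: "Oslo" }, "address")); // true
console.log(hasExpectedType({ city: "Oslo" }, "address")); // false
```

Run it against a sample of migrated documents fetched with your client before declaring the migration done.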
Splitting a Document Type
When a single type grows too broad, you may need to split it. For example, splitting page into landingPage and articlePage. This is a two-step migration: first create the new documents, then fix all references pointing to the old ones.
Step 1: Create New Documents
// migrations/split-page-to-article/index.ts
import { defineMigration, create, del } from "sanity/migrate";
export default defineMigration({
title: "Split article pages into articlePage type",
documentTypes: ["page"],
filter: 'pageType == "article"',
migrate: {
document(doc) {
const { _id, _rev, pageType, ...fields } = doc;
return [
create({
...fields,
_id: `articlePage-${_id}`,
_type: "articlePage",
}),
del(doc),
];
},
},
});
Step 2: Fix Broken References
Splitting documents changes their _id, which breaks any references pointing to the old document. Run a follow-up migration to update them:
// migrations/fix-article-page-references/index.ts
import { createClient } from "@sanity/client";
const client = createClient({
projectId: "your-project-id",
dataset: "production",
apiVersion: "2024-01-01",
token: process.env.SANITY_TOKEN,
useCdn: false,
});
// Find the IDs of the newly created articlePage documents
const newIds = await client.fetch(
  `*[_type == "articlePage"]._id`
);
for (const newId of newIds) {
const oldId = newId.replace("articlePage-", "");
// Find every document referencing the old ID
const referencing = await client.fetch(
`*[references($oldId)]{ _id, _type }`,
{ oldId }
);
if (referencing.length === 0) continue;
const transaction = client.transaction();
for (const doc of referencing) {
// Patch each reference field — adjust field names to match your schema
transaction.patch(doc._id, {
set: { "pageRef._ref": newId },
});
}
await transaction.commit();
}
Warning
The reference-fix step depends on your schema. If multiple fields can reference pages, you’ll need to patch each one. Query with references() to find affected documents, then inspect their structure to determine which fields need updating.
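If you’re not sure which fields hold the references, a small recursive walk over a fetched document can locate them. This is a standalone sketch, not a sanity/migrate API: it returns the attribute paths whose _ref matches the old ID, which you can then use as keys in a patch set:

```typescript
// Walk a plain document object and collect the path of every reference
// whose _ref matches oldId, so you know which fields need patching.
function findRefPaths(value: unknown, oldId: string, path = ""): string[] {
  if (Array.isArray(value)) {
    return value.flatMap((item, i) => findRefPaths(item, oldId, `${path}[${i}]`));
  }
  if (typeof value === "object" && value !== null) {
    const obj = value as Record<string, unknown>;
    if (obj._ref === oldId) return [path];
    return Object.entries(obj).flatMap(([key, v]) =>
      findRefPaths(v, oldId, path ? `${path}.${key}` : key)
    );
  }
  return [];
}

// Top-level and array-nested references are both found
console.log(findRefPaths({ pageRef: { _type: "reference", _ref: "page-123" } }, "page-123"));
// [ 'pageRef' ]
console.log(findRefPaths({ sections: [{ link: { _ref: "page-123" } }] }, "page-123"));
// [ 'sections[0].link' ]
```

Note this inspects the document snapshot you fetched; you still build the actual patches yourself, as in the transaction loop above.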
Migrating Portable Text
Portable Text migrations are the trickiest because the structure is deeply nested. Common scenarios include renaming a custom block type, adding a new field to inline objects, or restructuring marks.
function migratePortableText(blocks) {
return blocks.map((block) => {
// Rename a custom block type
if (block._type === "codeBlock") {
return { ...block, _type: "codeSnippet" };
}
// Modify children (inline content)
if (block._type === "block" && block.children) {
return {
...block,
children: block.children.map((child) => {
if (child._type === "authorRef") {
return { ...child, _type: "contributorRef" };
}
return child;
}),
};
}
return block;
});
}
Note
Always write Portable Text transforms as pure functions that take blocks and return blocks. This makes them easy to unit test before running against your dataset.
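For instance, here is a minimal standalone test of the block-renaming case. The rename logic from migratePortableText is re-declared so the snippet runs on its own:

```typescript
// Mirrors the codeBlock → codeSnippet rename from migratePortableText above,
// re-declared so this test sketch is self-contained.
type Block = { _type: string; [key: string]: unknown };

function renameCodeBlocks(blocks: Block[]): Block[] {
  return blocks.map((block) =>
    block._type === "codeBlock" ? { ...block, _type: "codeSnippet" } : block
  );
}

const input: Block[] = [
  { _type: "codeBlock", code: "console.log(1)" },
  { _type: "block", children: [] },
];
const output = renameCodeBlocks(input);

console.log(output[0]._type); // codeSnippet
console.log(input[0]._type); // codeBlock (pure function: the input is untouched)
```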
Running Migrations Safely
Regardless of the pattern, every migration should follow the same workflow:
- Query first — run your GROQ query in Vision to verify you’re targeting the right documents
- Dry run — defineMigration runs in dry-run mode by default, so use sanity migration run without --no-dry-run to preview changes
- Run against a cloned dataset — use sanity dataset copy to create a test copy
- Validate after — query the migrated documents to confirm the new shape
- Run in production — only after validation passes
# Clone your dataset for testing
npx sanity dataset copy production staging
# Dry run (default)
npx sanity migration run my-migration --dataset staging
# Execute after review
npx sanity migration run my-migration --dataset staging --no-dry-run
Batching Large Datasets
If you’re using raw @sanity/client transactions (for example, in the reference-fix step of a document split), don’t accumulate every patch into a single transaction. Large transactions will fail or run out of memory. Batch them:
const BATCH_SIZE = 100;
for (let i = 0; i < documents.length; i += BATCH_SIZE) {
const batch = documents.slice(i, i + BATCH_SIZE);
const transaction = client.transaction();
for (const doc of batch) {
transaction.patch(doc._id, {
set: { heading: doc.title },
unset: ["title"],
});
}
await transaction.commit();
console.log(`Batch ${Math.floor(i / BATCH_SIZE) + 1} committed`);
}
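If you reuse this pattern across scripts, the slicing arithmetic can be factored into a small helper:

```typescript
// Split an array into consecutive batches of at most `size` items
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```

Each batch then gets its own transaction and commit, exactly as in the loop above.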
defineMigration handles batching internally, which is another reason to prefer it for straightforward migrations.
This workflow catches issues before they hit production. Migrations are easy to write and hard to undo — take the extra five minutes to validate.