# BulkCopy
For inserting many rows in one shot, looping `Save` row by row is slow: every row costs a round trip. SQL Server has a dedicated bulk-copy mechanism (`SqlBulkCopy`) that streams all rows to the server in a single TDS bulk-load operation; Artisan.Orm wraps it in two extension methods plus matching `RepositoryBase` wrappers.
Added in v4 — see What's New in v4.
| Method | Mapping path | When to use |
|---|---|---|
| `BulkCopy<T>` | Registered `[MapperFor]` mapper (`CreateDataTable` / `CreateDataRow`) | You already have a mapper for the type |
| `BulkCopyAs<T>` | Auto-mapping — properties of `T` are reflected into a `DataTable` once and cached | Quick imports of DTOs without a registered mapper |
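For reference, `BulkCopy<T>` feeds `SqlBulkCopy` from the mapper's `CreateDataTable` and `CreateDataRow` methods. A minimal sketch of what such a mapper can look like, assuming a hypothetical `Record` entity with `Id`, `Name` and `Amount` columns:

```csharp
[MapperFor(typeof(Record))]
public static class RecordMapper
{
    // Shape of the rows that BulkCopy<T> streams to the server.
    public static DataTable CreateDataTable()
    {
        var table = new DataTable("Records");
        table.Columns.Add("Id",     typeof(int));
        table.Columns.Add("Name",   typeof(string));
        table.Columns.Add("Amount", typeof(decimal));
        return table;
    }

    // One entity projected into values, in the same order as the columns above.
    public static object[] CreateDataRow(Record r) =>
        new object[] { r.Id, r.Name, r.Amount };
}
```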
Both methods come in sync and async forms (the async versions take a `CancellationToken`), and both exist as extension methods on `SqlConnection` and as instance methods on `RepositoryBase`:
```csharp
public class RecordRepository : RepositoryBase
{
    public RecordRepository(string connStr) : base(connStr) { }

    public int ImportRecords(IList<Record> rows) =>
        BulkCopy(rows, "dbo.Records",
            options: SqlBulkCopyOptions.TableLock,
            batchSize: 5_000,
            timeout: 60);

    public Task<int> ImportRecordsAsync(IList<Record> rows, CancellationToken ct) =>
        BulkCopyAsync(rows, "dbo.Records", ct,
            options: SqlBulkCopyOptions.TableLock,
            batchSize: 5_000,
            timeout: 60);
}
```

The repository's current `Transaction` (if one is in flight) is automatically threaded through to the underlying `SqlBulkCopy` — see *Inside a transaction* below.
The `SqlConnection` extension methods work the same way:

```csharp
using var conn = new SqlConnection(connectionString);

int written = conn.BulkCopy<Record>(rows, "dbo.Records",
    options: SqlBulkCopyOptions.TableLock,
    batchSize: 5_000);

// Connection is opened on entry if it was closed, and closed back on exit.
```

The most useful `SqlBulkCopyOptions` flags:
- `TableLock` — request a table-level lock for the duration of the copy. Significantly faster on empty / staging tables, but blocks readers.
- `KeepIdentity` — preserve the `Id` values from the source rows (the default behaviour is to let SQL Server allocate new identities). This is the bulk-copy equivalent of `IDENTITY_INSERT` on the target table.
- `KeepNulls` — write `NULL` instead of column defaults for null source values.
- `CheckConstraints` — validate `CHECK` constraints during the copy (off by default — bulk copy is meant for trusted data).
- `FireTriggers` — run triggers on the target table (off by default).
Combine flags with `|`:

```csharp
conn.BulkCopy<Record>(rows, "dbo.Records",
    options: SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.KeepIdentity);
```

`batchSize: 0` (the default) means "send all rows in one batch". For very large data sets, set a positive batch size — the typical sweet spot is 1,000 to 10,000 — to:
- avoid filling the transaction log in one go,
- let SQL Server release locks between batches,
- see partial progress if the operation is canceled.
There is no universal optimal value; benchmark for your row width and indexes.
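For example, a batched async import that can be cancelled between batches. This is a sketch: it assumes the `SqlConnection` overload of `BulkCopyAsync` takes the same arguments as the repository method shown above, and that `rows` is your source list.

```csharp
using var conn = new SqlConnection(connectionString);
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(2));

// 5,000-row batches: the transaction log fills incrementally, and if the
// token fires mid-copy, the batches already sent stay in the table.
int written = await conn.BulkCopyAsync<Record>(rows, "dbo.Records", cts.Token,
    batchSize: 5_000);
```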
## Inside a transaction

Both repository wrappers automatically pick up the repository's current `Transaction`, so the bulk insert participates in the surrounding `BeginTransaction` / `RunInTransaction` block:
```csharp
RunInTransaction(tran =>
{
    BulkCopy(headerRows, "dbo.Headers");
    BulkCopy(detailRows, "dbo.Details");
    // Both bulk inserts roll back together if anything throws here.
});
```

For the standalone `SqlConnection` extension methods, pass the transaction as a parameter:
```csharp
using var conn = new SqlConnection(connStr);
conn.Open();
using var tran = conn.BeginTransaction();

try
{
    conn.BulkCopy<Header>(headers, "dbo.Headers", tran);
    conn.BulkCopy<Detail>(details, "dbo.Details", tran);
    tran.Commit();
}
catch
{
    tran.Rollback();
    throw;
}
```

When `T` has no `[MapperFor]` attribute, `BulkCopyAs` reads its public, settable, simple-typed properties via reflection and builds a `DataTable` once (the metadata is cached for subsequent calls):
```csharp
public record StagingRecord(int Id, string Name, decimal Amount);

using var conn = new SqlConnection(connStr);
int written = conn.BulkCopyAs<StagingRecord>(rows, "dbo.RecordsStaging");
```

Property names must match the destination column names. For non-trivial transformations, use `BulkCopy<T>` with a `[MapperFor]` mapper instead — its `CreateDataRow` is the natural place to compute / project values.
In both flavours, source-to-target columns are matched by name, not by position. The order of columns in your `CreateDataTable` does not have to match the SQL table's column order.
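For example, a `CreateDataTable` that lists columns in a different order than `dbo.Records` declares them still copies correctly. This is a sketch; note that `CreateDataRow` must follow the `DataTable` order, not the SQL table order:

```csharp
public static DataTable CreateDataTable()
{
    var table = new DataTable("Records");
    // Declared (Id, Name, Amount) in SQL; matched by name, so order is free.
    table.Columns.Add("Amount", typeof(decimal));
    table.Columns.Add("Id",     typeof(int));
    table.Columns.Add("Name",   typeof(string));
    return table;
}
```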
All four methods return the number of rows written:

```csharp
int rowsWritten = repo.BulkCopy(rows, "dbo.Records");
```

Bulk copy is the right tool for a straight `INSERT` of many rows. If you also need:
- `UPDATE` / `DELETE` semantics (`MERGE`),
- enforcement of business invariants computed in T-SQL,
- foreign-key resolution via translation tables (see Negative identities and object graph saving),
then a stored procedure with a table-valued parameter is more flexible. Bulk copy can outrun a TVP for very large inserts because it streams rows straight into the target table, without first materializing the whole batch in tempdb, but the TVP path is often easier to maintain.
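For comparison, here is the TVP route in plain ADO.NET rather than an Artisan.Orm API; the table type `dbo.RecordTableType`, the procedure `dbo.MergeRecords` and the pre-filled `recordsTable` are all hypothetical:

```csharp
using var conn = new SqlConnection(connStr);
conn.Open();

using var cmd = new SqlCommand("dbo.MergeRecords", conn)
{
    CommandType = CommandType.StoredProcedure
};

// recordsTable is a DataTable shaped like the user-defined table type.
var tvp = cmd.Parameters.AddWithValue("@Records", recordsTable);
tvp.SqlDbType = SqlDbType.Structured;
tvp.TypeName  = "dbo.RecordTableType";

// Inside the procedure you can MERGE, validate invariants, resolve FKs, etc.
cmd.ExecuteNonQuery();
```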
See also: