package core:encoding/csv
Overview
package csv reads and writes comma-separated values (CSV) files. This package supports the format described in RFC 4180.
Example:
package main
import "core:fmt"
import "core:encoding/csv"
import "core:os"
// Requires keeping the entire CSV file in memory at once
iterate_csv_from_string :: proc(filename: string) {
	r: csv.Reader
	r.trim_leading_space  = true
	r.reuse_record        = true // Without it you have to delete(record)
	r.reuse_record_buffer = true // Without it you have to delete each field in the record
	defer csv.reader_destroy(&r)

	csv_data, ok := os.read_entire_file(filename)
	if ok {
		csv.reader_init_with_string(&r, string(csv_data))
	} else {
		fmt.printfln("Unable to open file: %v", filename)
		return
	}
	defer delete(csv_data)

	for record, i, err in csv.iterator_next(&r) {
		if err != nil { /* Do something with error */ }
		for f, j in record {
			fmt.printfln("Record %v, field %v: %q", i, j, f)
		}
	}
}
// Reads the CSV as it's processed (with a small buffer)
iterate_csv_from_stream :: proc(filename: string) {
	fmt.printfln("Hellope from %v", filename)
	r: csv.Reader
	r.trim_leading_space  = true
	r.reuse_record        = true // Without it you have to delete(record)
	r.reuse_record_buffer = true // Without it you have to delete each field in the record
	defer csv.reader_destroy(&r)

	handle, err := os.open(filename)
	if err != nil {
		fmt.eprintfln("Error opening file: %v", filename)
		return
	}
	defer os.close(handle)
	csv.reader_init(&r, os.stream_from_handle(handle))

	for record, i in csv.iterator_next(&r) {
		for f, j in record {
			fmt.printfln("Record %v, field %v: %q", i, j, f)
		}
	}
	fmt.printfln("Error: %v", csv.iterator_last_error(r))
}
// Read all records at once
read_csv_from_string :: proc(filename: string) {
	r: csv.Reader
	r.trim_leading_space  = true
	r.reuse_record        = true // Without it you have to delete(record)
	r.reuse_record_buffer = true // Without it you have to delete each field in the record
	defer csv.reader_destroy(&r)

	csv_data, ok := os.read_entire_file(filename)
	if ok {
		csv.reader_init_with_string(&r, string(csv_data))
	} else {
		fmt.printfln("Unable to open file: %v", filename)
		return
	}
	defer delete(csv_data)

	records, err := csv.read_all(&r)
	if err != nil { /* Do something with CSV parse error */ }
	defer {
		for rec in records {
			delete(rec)
		}
		delete(records)
	}

	for rec, i in records {
		for f, j in rec {
			fmt.printfln("Record %v, field %v: %q", i, j, f)
		}
	}
}
Index
Types (5)
Constants (1)
Variables (1)
Types
Error

Error :: union {
	Reader_Error,
	io.Error,
}
Reader

Reader :: struct {
	// comma is the field delimiter
	// reader_init will set it to ','
	// A "comma" must be a valid rune and must not be \r, \n, or the Unicode replacement character (0xfffd)
	comma: rune,

	// comment, if not 0, is the comment character
	// Lines beginning with the comment character without preceding whitespace are ignored
	comment: rune,

	// fields_per_record is the number of expected fields per record
	// if fields_per_record is >0, 'read' requires each record to have that field count
	// if fields_per_record is  0, 'read' sets it to the field count in the first record
	// if fields_per_record is <0, no check is made and records may have a variable field count
	fields_per_record: int,

	// If trim_leading_space is true, leading whitespace in a field is ignored
	// This is done even if the field delimiter (comma) is whitespace
	trim_leading_space: bool,

	// If lazy_quotes is true, a quote may appear in an unquoted field and a
	// non-doubled quote may appear in a quoted field
	lazy_quotes: bool,

	// multiline_fields, when set to true, will treat a field starting with a " as a multiline string;
	// instead of reading until the next \n, it'll read until the next "
	multiline_fields: bool,

	// reuse_record controls whether calls to 'read' may return a slice using the backing buffer,
	// for performance
	// By default, each call to 'read' returns a newly allocated slice
	reuse_record: bool,

	// reuse_record_buffer controls whether calls to 'read' clone the strings of each field or use
	// the data stored in the record buffer, for performance
	// By default, each call to 'read' clones the strings of each field
	reuse_record_buffer: bool,

	// internal buffers
	r:             bufio.Reader,
	line_count:    int, // current line being read in the CSV file
	raw_buffer:    [dynamic]u8,
	record_buffer: [dynamic]u8,
	field_indices: [dynamic]int,
	last_record:   [dynamic]string,
	sr:            strings.Reader,

	// Set and used by the iterator. Query using `iterator_last_error`
	last_iterator_error: Error,
}
Reader is a data structure used for reading records from a CSV-encoded file
The associated procedures for Reader expect its input to conform to RFC 4180.
Reader_Error_Kind

Reader_Error_Kind :: enum int {
	Bare_Quote,
	Quote,
	Field_Count,
	Invalid_Delim,
}
Writer

Writer :: struct {
	// Field delimiter (set to ',' by writer_init)
	comma: rune,
	// If set to true, \r\n will be used as the line terminator
	use_crlf: bool,
	w: io.Stream,
}
Writer is a data structure used for writing records using a CSV-encoding.
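This page gives no Writer example, so here is a minimal sketch of writing CSV to an in-memory buffer. It assumes `writer_init`, `write`, and `writer_flush` behave as documented below, and that `strings.to_stream` provides an `io.Stream` over a `strings.Builder`; verify these against your Odin version.

```odin
package main

import "core:encoding/csv"
import "core:fmt"
import "core:strings"

main :: proc() {
	b: strings.Builder
	strings.builder_init(&b)
	defer strings.builder_destroy(&b)

	w: csv.Writer
	csv.writer_init(&w, strings.to_stream(&b)) // assumed: Builder exposed as io.Stream

	// Each record is a slice of strings, one per field
	_ = csv.write(&w, []string{"name", "age"})
	_ = csv.write(&w, []string{"Hellope, World", "42"}) // field containing a comma gets quoted
	_ = csv.writer_flush(&w)

	fmt.print(strings.to_string(b))
}
```

Writing to a file instead would only change the stream passed to `writer_init`, e.g. one obtained via `os.stream_from_handle` as in the reader examples above.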
Constants
DEFAULT_RECORD_BUFFER_CAPACITY
DEFAULT_RECORD_BUFFER_CAPACITY :: 256
Variables
reader_error_kind_string
reader_error_kind_string: [Reader_Error_Kind]string = …
Procedures
is_io_error

is_io_error checks whether an Error is a specific io.Error kind
iterator_last_error

Gets the last CSV parse error if it was ignored in the iterator loop
for record, row_idx in csv.iterator_next(&r) { ... }
iterator_next

Returns one record at a time.
for record, row_idx in csv.iterator_next(&r) { ... }
TIP: If you process the results within the loop and don't need to own them, you can set the Reader's reuse_record and reuse_record_buffer to true; you then won't need to delete the record or its fields.
read
read :: proc(r: ^Reader, allocator := context.allocator) -> (record: []string, err: Error) {…}
read reads a single record (a slice of fields) from r
All \r\n sequences are normalized to \n, including in multi-line fields
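As a sketch of the signature above, a record-at-a-time loop with `read` might look like this. It assumes `is_io_error(err, .EOF)` can distinguish normal end-of-input from real errors; the reuse flags are set so the records need not be deleted:

```odin
package main

import "core:encoding/csv"
import "core:fmt"

main :: proc() {
	r: csv.Reader
	r.reuse_record        = true // record slice is reused between calls
	r.reuse_record_buffer = true // field strings are reused as well
	csv.reader_init_with_string(&r, "a,b\n1,2\n")
	defer csv.reader_destroy(&r)

	for {
		record, err := csv.read(&r)
		if err != nil {
			if !csv.is_io_error(err, .EOF) { // assumed usage; EOF just means we're done
				fmt.eprintfln("CSV error: %v", err)
			}
			break
		}
		fmt.println(record)
	}
}
```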
read_all
read_all :: proc(r: ^Reader, allocator := context.allocator) -> ([][]string, Error) {…}
read_all reads all the remaining records from r. Each record is a slice of fields. read_all is defined to read until an EOF, and does not treat EOF as an error
read_all_from_string
read_all_from_string :: proc(input: string, records_allocator := context.allocator, buffer_allocator := context.allocator) -> ([][]string, Error) {…}
read_all_from_string reads all the remaining records from the provided input.
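Using the signature above, a minimal sketch (the input string and field values here are illustrative):

```odin
package main

import "core:encoding/csv"
import "core:fmt"

main :: proc() {
	records, err := csv.read_all_from_string("name,age\nHellope,42\n")
	if err != nil {
		fmt.eprintfln("CSV error: %v", err)
		return
	}
	defer {
		// Caller owns the result: free each record, then the outer slice
		for rec in records {
			delete(rec)
		}
		delete(records)
	}

	for rec, i in records {
		fmt.printfln("Record %v: %v", i, rec)
	}
}
```

This is a convenience wrapper over initializing a Reader with a string and calling read_all, useful when the whole input is already in memory.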
read_from_string
read_from_string :: proc(input: string, record_allocator := context.allocator, buffer_allocator := context.allocator) -> (record: []string, n: int, err: Error) {…}
read_from_string reads a single record (a slice of fields) from the provided input.
reader_init
reader_init :: proc(reader: ^Reader, r: io.Stream, buffer_allocator := context.allocator) {…}
reader_init initializes a new Reader from r
reader_init_with_string
reader_init_with_string :: proc(reader: ^Reader, s: string, buffer_allocator := context.allocator) {…}
reader_init_with_string initializes a new Reader from s
write

write writes a single CSV record to w with any necessary quoting. A record is a slice of strings, where each string is a single field.
If the underlying io.Writer requires flushing, make sure to call io.flush
write_all
write_all writes multiple CSV records to w using write, and then flushes (if necessary).
writer_flush
writer_flush flushes the underlying io.Writer. If the underlying io.Writer does not support flush, nil is returned.
writer_init
writer_init initializes a Writer that writes to w
Procedure Groups
This section is empty.
Generation Information
Generated with odin version dev-2024-11 (vendor "odin") Windows_amd64 @ 2024-11-20 21:11:50.551307600 +0000 UTC