package core:encoding/csv



    package csv reads and writes comma-separated values (CSV) files. This package supports the format described in RFC 4180


    Error ¶

    Error :: union {…}

    Reader ¶

    Reader :: struct {
    	// comma is the field delimiter
    	// reader_init will set it to be ','
    	// A "comma" must be a valid rune and must not be \r, \n, or the Unicode replacement character (0xfffd)
    	comma:               rune,
    	// comment, if not 0, is the comment character
    	// Lines beginning with the comment character without preceding whitespace are ignored
    	comment:             rune,
    	// fields_per_record is the number of expected fields per record
    	//         if fields_per_record is >0, 'read' requires each record to have that field count
    	//         if fields_per_record is  0, 'read' sets it to the field count in the first record
    	//         if fields_per_record is <0, no check is made and records may have a variable field count
    	fields_per_record:   int,
    	// If trim_leading_space is true, leading whitespace in a field is ignored
    	// This is done even if the field delimiter (comma) is whitespace
    	trim_leading_space:  bool,
    	// If lazy_quotes is true, a quote may appear in an unquoted field and a non-doubled quote may appear in a quoted field
    	lazy_quotes:         bool,
    	// multiline_fields, when set to true, will treat a field starting with a " as a multiline string
    	// therefore, instead of reading until the next \n, it'll read until the next "
    	multiline_fields:    bool,
    	// reuse_record controls whether calls to 'read' may return a slice using the backing buffer
    	// for performance
    	// By default, each call to 'read' returns a newly allocated slice
    	reuse_record:        bool,
    	// reuse_record_buffer controls whether calls to 'read' clone the strings of each field or use
    	// the data stored in the record buffer for performance
    	// By default, each call to 'read' clones the strings of each field
    	reuse_record_buffer: bool,
    	// internal buffers
    	r:                   bufio.Reader,
    	// current line being read in the CSV file
    	line_count:          int,
    	raw_buffer:          [dynamic]u8,
    	record_buffer:       [dynamic]u8,
    	field_indices:       [dynamic]int,
    	last_record:         [dynamic]string,
    	sr:                  strings.Reader,
    }

    Reader is a data structure used for reading records from a CSV-encoded file

    The associated procedures for Reader expect its input to conform to RFC 4180.
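
    The fields above can be set after reader_init and before the first read. A minimal sketch (the input string and the '|' delimiter are illustrative):

```odin
package main

import "core:encoding/csv"
import "core:fmt"

main :: proc() {
	r: csv.Reader
	csv.reader_init_with_string(&r, "name| age\nalice| 30\nbob| 41\n")
	defer csv.reader_destroy(&r)

	r.comma              = '|' // use '|' instead of the default ','
	r.trim_leading_space = true
	r.fields_per_record  = 2   // require exactly 2 fields per record

	records, err := csv.read_all(&r)
	if err != nil {
		fmt.eprintln("CSV error:", err)
		return
	}
	for record in records {
		fmt.println(record)
	}
}
```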


    Reader_Error ¶

    Reader_Error :: struct {
    	kind:       Reader_Error_Kind,
    	start_line: int,
    	line:       int,
    	column:     int,
    	expected:   int,
    	got:        int,
    }

    Reader_Error_Kind ¶

    Reader_Error_Kind :: enum int {…}

    Writer ¶

    Writer :: struct {
    	// Field delimiter (set to ',' with writer_init)
    	comma:    rune,
    	// if set to true, \r\n will be used as the line terminator
    	use_crlf: bool,
    	w:        io.Stream,
    }

    Writer is a data structure used for writing records using CSV encoding.

    is_io_error ¶

    is_io_error :: proc(err: Error, io_err: io.Error) -> bool {…}

    is_io_error checks whether an Error is a specific io.Error kind

    read ¶

    read :: proc(r: ^Reader, allocator := context.allocator) -> (record: []string, err: Error) {…}

    read reads a single record (a slice of fields) from r

    All \r\n sequences are normalized to \n, including in multi-line fields
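
    As a sketch of a record-at-a-time loop, using is_io_error to tell end-of-input apart from real failures (the input string is illustrative):

```odin
package main

import "core:encoding/csv"
import "core:fmt"

main :: proc() {
	r: csv.Reader
	csv.reader_init_with_string(&r, "a,b\r\nc,d\r\n")
	defer csv.reader_destroy(&r)

	for {
		record, err := csv.read(&r)
		if csv.is_io_error(err, .EOF) {
			break // end of input, not a failure
		}
		if err != nil {
			fmt.eprintln("CSV error:", err)
			return
		}
		fmt.println(record) // each record is a []string of fields
	}
}
```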

    read_all ¶

    read_all :: proc(r: ^Reader, allocator := context.allocator) -> ([][]string, Error) {…}

    read_all reads all the remaining records from r. Each record is a slice of fields. read_all is defined to read until EOF, and does not treat EOF as an error

    read_all_from_string ¶

    read_all_from_string :: proc(input: string, records_allocator := context.allocator, buffer_allocator := context.allocator) -> ([][]string, Error) {…}

    read_all_from_string reads all the remaining records from the provided input.
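
    A minimal sketch of parsing an entire CSV string in one call (the input string is illustrative):

```odin
package main

import "core:encoding/csv"
import "core:fmt"

main :: proc() {
	records, err := csv.read_all_from_string("a,b,c\n1,2,3\n")
	if err != nil {
		fmt.eprintln("CSV error:", err)
		return
	}
	for record, i in records {
		fmt.println(i, record)
	}
}
```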

    read_from_string ¶

    read_from_string :: proc(input: string, record_allocator := context.allocator, buffer_allocator := context.allocator) -> (record: []string, n: int, err: Error) {…}

    read_from_string reads a single record (a slice of fields) from the provided input.
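
    A sketch of reading only the first record from a string; the second return value, n, is ignored here (the input string is illustrative):

```odin
package main

import "core:encoding/csv"
import "core:fmt"

main :: proc() {
	record, _, err := csv.read_from_string("x,y,z\n1,2,3\n")
	if err != nil {
		fmt.eprintln("CSV error:", err)
		return
	}
	fmt.println(record) // first record only
}
```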

    reader_destroy ¶

    reader_destroy :: proc(r: ^Reader) {…}

    reader_destroy destroys a Reader

    reader_init ¶

    reader_init :: proc(reader: ^Reader, r: io.Stream, buffer_allocator := context.allocator) {…}

    reader_init initializes a new Reader from r

    reader_init_with_string ¶

    reader_init_with_string :: proc(reader: ^Reader, s: string, buffer_allocator := context.allocator) {…}

    reader_init_with_string initializes a new Reader from s

    write ¶

    write :: proc(w: ^Writer, record: []string) -> io.Error {…}

    write writes a single CSV record to w, applying any necessary quoting. A record is a slice of strings, where each string is a single field.

    If the underlying io.Writer requires flushing, make sure to call io.flush

    write_all ¶

    write_all :: proc(w: ^Writer, records: [][]string) -> io.Error {…}

    write_all writes multiple CSV records to w using write, and then flushes (if necessary).
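
    A sketch tying Writer, writer_init, and write_all together, writing into an in-memory strings.Builder. It assumes strings.to_stream to adapt the builder into an io.Stream; any other io.Stream destination works the same way:

```odin
package main

import "core:encoding/csv"
import "core:fmt"
import "core:strings"

main :: proc() {
	b: strings.Builder
	strings.builder_init(&b)
	defer strings.builder_destroy(&b)

	w: csv.Writer
	// strings.to_stream is assumed here to expose the builder as an io.Stream
	csv.writer_init(&w, strings.to_stream(&b))

	records := [][]string{
		{"name", "note"},
		{"alice", "plain field"},
		{"bob", "needs, quoting"}, // contains the delimiter, so it gets quoted
	}
	if err := csv.write_all(&w, records); err != .None {
		fmt.eprintln("write error:", err)
		return
	}
	fmt.print(strings.to_string(b))
}
```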

    writer_flush ¶

    writer_flush :: proc(w: ^Writer) -> io.Error {…}

    writer_flush flushes the underlying io.Writer. If the underlying io.Writer does not support flush, nil is returned.

    writer_init ¶

    writer_init :: proc(writer: ^Writer, w: io.Stream) {…}

    writer_init initializes a Writer that writes to w

    Procedure Groups

    This section is empty.

    Source Files

    Generation Information

    Generated with odin version dev-2024-04 (vendor "odin") Windows_amd64 @ 2024-04-26 21:08:58.369877200 +0000 UTC