Generates a tf.data.Dataset from text files in a directory.
Source: R/dataset-utils.R
If your directory structure is:
main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
Then calling text_dataset_from_directory(main_directory, labels = 'inferred') will return a tf.data.Dataset that yields batches of
texts from the subdirectories class_a and class_b, together with labels
0 and 1 (0 corresponding to class_a and 1 corresponding to class_b).
Only .txt files are supported at this time.
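For example, a minimal call on the layout above might look like the following sketch (assuming the keras3 package; the path "main_directory" and the name train_ds are illustrative):
library(keras3)

# Sketch: infer labels from subdirectory names, so class_a -> 0 and class_b -> 1.
train_ds <- text_dataset_from_directory(
  "main_directory",
  labels = "inferred",
  label_mode = "int",
  batch_size = 32
)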
Usage
text_dataset_from_directory(
directory,
labels = "inferred",
label_mode = "int",
class_names = NULL,
batch_size = 32L,
max_length = NULL,
shuffle = TRUE,
seed = NULL,
validation_split = NULL,
subset = NULL,
follow_links = FALSE,
verbose = TRUE
)
Arguments
- directory
Directory where the data is located. If labels is "inferred", it should contain subdirectories, each containing text files for a class. Otherwise, the directory structure is ignored.
- labels
Either "inferred" (labels are generated from the directory structure), NULL (no labels), or a list/tuple of integer labels of the same size as the number of text files found in the directory. Labels should be sorted according to the alphanumeric order of the text file paths (obtained via os.walk(directory) in Python).
- label_mode
String describing the encoding of labels. Options are:
  - "int": the labels are encoded as integers (e.g. for sparse_categorical_crossentropy loss).
  - "categorical": the labels are encoded as a categorical vector (e.g. for categorical_crossentropy loss).
  - "binary": the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for binary_crossentropy).
  - NULL: no labels.
- class_names
Only valid if labels is "inferred". This is the explicit list of class names (must match names of subdirectories). Used to control the order of the classes (otherwise alphanumerical order is used).
- batch_size
Size of the batches of data. If NULL, the data will not be batched (the dataset will yield individual samples). Defaults to 32.
- max_length
Maximum size of a text string. Texts longer than this will be truncated to max_length.
- shuffle
Whether to shuffle the data. If set to FALSE, sorts the data in alphanumeric order. Defaults to TRUE.
- seed
Optional random seed for shuffling and transformations.
- validation_split
Optional float between 0 and 1, fraction of data to reserve for validation.
- subset
Subset of the data to return. One of "training", "validation", or "both". Only used if validation_split is set. When subset = "both", the utility returns a tuple of two datasets (the training and validation datasets, respectively).
- follow_links
Whether to visit subdirectories pointed to by symlinks. Defaults to FALSE.
- verbose
Whether to display information about the number of classes and the number of files found. Defaults to TRUE.
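As a sketch of the validation-split workflow (the fraction 0.2 and the seed value are illustrative; the %<-% multi-assignment operator is assumed to be available, as re-exported by the keras R packages):
c(train_ds, val_ds) %<-% text_dataset_from_directory(
  "main_directory",
  validation_split = 0.2,  # reserve 20% of files for validation
  subset = "both",         # return both the training and validation datasets
  seed = 1337              # fixed seed makes the shuffled split reproducible
)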
Value
A tf.data.Dataset object.
If label_mode is NULL, it yields string tensors of shape (batch_size,), containing the contents of a batch of text files.
Otherwise, it yields a tuple (texts, labels), where texts has shape (batch_size,) and labels follows the format described below.
Rules regarding labels format:
- if label_mode is int, the labels are an int32 tensor of shape (batch_size,).
- if label_mode is binary, the labels are a float32 tensor of 1s and 0s of shape (batch_size, 1).
- if label_mode is categorical, the labels are a float32 tensor of shape (batch_size, num_classes), representing a one-hot encoding of the class index.
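To inspect these shapes, one batch can be pulled from the dataset with reticulate's iterator helpers (a sketch, assuming a dataset train_ds built with label_mode = "int"):
batch <- reticulate::iter_next(reticulate::as_iterator(train_ds))
texts <- batch[[1]]   # string tensor of shape (batch_size,)
labels <- batch[[2]]  # int32 tensor of shape (batch_size,)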
See also
Other dataset utils: audio_dataset_from_directory() image_dataset_from_directory() split_dataset() timeseries_dataset_from_array()
Other utils: audio_dataset_from_directory() clear_session() config_disable_interactive_logging() config_disable_traceback_filtering() config_enable_interactive_logging() config_enable_traceback_filtering() config_is_interactive_logging_enabled() config_is_traceback_filtering_enabled() get_file() get_source_inputs() image_array_save() image_dataset_from_directory() image_from_array() image_load() image_smart_resize() image_to_array() layer_feature_space() normalize() pad_sequences() set_random_seed() split_dataset() timeseries_dataset_from_array() to_categorical() zip_lists()
Other preprocessing: image_dataset_from_directory() image_smart_resize() timeseries_dataset_from_array()