My deep learning training set has 200,000 images. Why does training finish after only about 230 batches, and how can I train on all 200,000 images?

As a beginner, I am trying to run a GitHub project ( https://github.com/david-gpu/. )
using the Large-scale CelebFaces Attributes (CelebA) Dataset ( http://mmlab.ie.cuhk.edu.hk/p. ), whose Align&Cropped Images set contains about 200,000 images.
However, every training run ends quickly, after only about 230, 280, or 300 batches.
What can I do to train on all 200,000 images?
I sincerely hope someone can resolve my confusion.


The problem has been solved. srez stops training after a fixed amount of wall-clock time rather than after a fixed number of epochs, which is why each run ends after only a few hundred batches. In srez_main.py, the source code contains:

tf.app.flags.DEFINE_integer('train_time', 20,
                            "Time in minutes to train the model")

Change the 20 to the number of minutes you want to train for, roughly estimating how long a full pass over the dataset will take.
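
To pick a suitable train_time, you can work backwards from the batch counts already observed. Below is a minimal Python sketch of that arithmetic; the batch size of 16 is an assumption taken from srez's default batch_size flag, and the 230 batches per 20-minute run come from the question above.

# Rough estimate of the 'train_time' needed for one full pass over CelebA.
dataset_size = 200_000      # Align&Cropped CelebA images
batch_size = 16             # assumed srez default; check srez_main.py
batches_per_run = 230       # batches observed in one default 20-minute run
minutes_per_run = 20        # srez's default train_time

images_per_minute = batches_per_run * batch_size / minutes_per_run
minutes_per_epoch = dataset_size / images_per_minute
print(f"~{minutes_per_epoch:.0f} minutes for one pass over the dataset")

With these numbers the estimate is 200000 / (230 * 16 / 20) ≈ 1087 minutes, i.e. roughly 18 hours for a single pass over the dataset.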
