I'm a beginner trying to run a GitHub project ( https://github.com/david-gpu/.)
I found the Large-scale CelebFaces Attributes (CelebA) Dataset ( http://mmlab.ie.cuhk.edu.hk/p.), which provides about 200,000 images in its Align&Cropped Images set.
However, every training run ends quickly, covering only around 230/280/300 batches each time.
What can I do to make training use all 200,000 images?
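To make the question concrete, here is a minimal sketch of what "training on all 200,000 images" would mean: iterating over the entire dataset in mini-batches for each epoch, rather than stopping after the first few batches. The names `NUM_IMAGES`, `BATCH_SIZE`, and `epoch_batches` are illustrative placeholders I made up, not names from the project:

```python
# Illustrative sketch (not code from the project): covering the whole
# dataset once per epoch means iterating over every mini-batch index range.

NUM_IMAGES = 200_000  # size of the CelebA Align&Cropped set
BATCH_SIZE = 16       # hypothetical batch size

def epoch_batches(num_images, batch_size):
    """Yield (start, end) index ranges that together cover the dataset once."""
    for start in range(0, num_images, batch_size):
        yield start, min(start + batch_size, num_images)

# One full epoch over 200,000 images at batch size 16 is 12,500 batches,
# far more than the ~300 batches the runs above stop at.
batches_per_epoch = sum(1 for _ in epoch_batches(NUM_IMAGES, BATCH_SIZE))
print(batches_per_epoch)  # → 12500
```

So if a run stops after only a few hundred batches, it has seen only a small fraction of one epoch.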
I sincerely hope the experts here can clear up my confusion.