Early detection and diagnosis of breast cancer is key to controlling and curing this disease, as well as to reducing costs for the patient. AI-based computer-aided diagnosis (CAD) systems are designed to help physicians make faster and more accurate decisions, and convolutional neural networks (CNNs) have achieved notable success in the medical field. However, CNN performance depends heavily on the quantity and quality of the input data, which is a major challenge in medical imaging, where image collections are very limited. To address this problem, a Deep Convolutional Generative Adversarial Network (DCGAN) is proposed to generate breast tumor images from the original breast ultrasound (BUSI) dataset, which consists of 437 benign masses and 210 malignant tumors. To assess the performance of the proposed model, the generated images, combined with the original data, are evaluated both qualitatively (visually) and quantitatively on a classification task by feeding them to two classification models, a simple self-designed CNN and DenseNet, to gauge their quality and clinical value. Classification using only the original data yielded a sensitivity of 37% and an F1-score of 51%; with the DCGAN-augmented data, these increased to 81% and 70%, respectively. The results show that breast tumor images generated by the DCGAN can significantly improve classification performance and thus have the potential to assist physicians in medical image reading tasks.
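The reported gains can be made concrete by recalling how sensitivity and F1-score are computed from confusion-matrix counts. The sketch below is illustrative only: the TP/FP/FN counts are hypothetical values chosen to reproduce the reported 81% sensitivity and 70% F1-score, not the paper's actual confusion matrix.

```python
def sensitivity(tp, fn):
    # Sensitivity (recall): fraction of actual positive (malignant) cases detected.
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # F1-score: harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = sensitivity(tp, fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts chosen to match the reported augmented-data results
# (100 malignant cases, 81 correctly detected, 50 false alarms):
tp, fp, fn = 81, 50, 19
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.81
print(f"F1-score    = {f1_score(tp, fp, fn):.2f}")  # 0.70
```

A rise in sensitivity from 37% to 81% means the augmented classifier misses far fewer malignant tumors, which is the clinically critical error in this task.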