PyTorch CNN Fruit Classification using ResNet18
GitHub: https://github.com/EthanSeok/Convolutional-Neural-Network
Agricultural deep learning paper review
Overview
- General study and hands-on practice with CNN and Faster R-CNN
- Train a neural network using agricultural image data
- Classify produce quality and check yield
CNN
About ResNet
“ResNet uses a residual learning framework to train much deeper networks (up to 152 layers) efficiently. ResNet uses residual blocks, characterized by ‘identity shortcut connections’, so that information can flow through deeper networks without being lost (without vanishing gradients).”
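To make the idea concrete, here is a minimal sketch of such a residual block in PyTorch (a simplified block for illustration, not the exact torchvision implementation): the input x is carried around the two convolutions through the identity shortcut and added back to their output.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    # simplified residual block: output = ReLU(F(x) + x)
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        identity = x                              # identity shortcut connection
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                      # add the shortcut back (residual learning)
        return F.relu(out)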
Installing Bing Image Downloader

pip install bing-image-downloader
Usage

from bing_image_downloader import downloader

downloader.download(query_string, limit=100, output_dir='dataset', adult_filter_off=True, force_replace=False, timeout=60, verbose=True)
- query_string: String to be searched.
- limit: (optional, default is 100) Number of images to download.
- output_dir: (optional, default is 'dataset') Name of output dir.
- adult_filter_off: (optional, default is True) Enable or disable adult filtering.
- force_replace: (optional, default is False) Delete folder if present and start a fresh download.
- timeout: (optional, default is 60) Timeout for connection in seconds.
- filter: (optional, default is "") Filter, choose from [line, photo, clipart, gif, transparent].
- verbose: (optional, default is True) Enable downloaded message.
With this code you can easily crawl the images you want.
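For example, the filter option listed above can restrict results to photos; a sketch, assuming a package version that supports the filter keyword (the query string and limit here are just placeholders):

from bing_image_downloader import downloader

# hypothetical example: download only photo-type results for an illustrative query
downloader.download('red tomato', limit=50, output_dir='dataset', adult_filter_off=True, force_replace=False, timeout=60, filter='photo', verbose=True)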
CNN Training Code

Bing Image Downloader: set the image download paths
import os
import shutil
from bing_image_downloader import downloader

directory_list = [
    './custom_dataset/train/',
    './custom_dataset/test/',
]

# create the train/test directories if they do not exist yet
for directory in directory_list:
    if not os.path.isdir(directory):
        os.makedirs(directory)

def dataset_split(query, train_cnt):
    # create a sub-directory for this query in both train and test sets
    for directory in directory_list:
        if not os.path.isdir(directory + '/' + query):
            os.makedirs(directory + '/' + query)
    # move the first train_cnt images to train, the rest to test
    cnt = 0
    for file_name in os.listdir(query):
        if cnt < train_cnt:
            print(f'[Train Dataset] {file_name}')
            shutil.move(query + '/' + file_name, './custom_dataset/train/' + query + '/' + file_name)
        else:
            print(f'[Test Dataset] {file_name}')
            shutil.move(query + '/' + file_name, './custom_dataset/test/' + query + '/' + file_name)
        cnt += 1
    shutil.rmtree(query)
Specify the query for the images you want to crawl with Bing Image Downloader. See the Bing Image Downloader usage above for details.
query = 'red tomato'
downloader.download(query, limit=100, output_dir='./', adult_filter_off=True, force_replace=False, timeout=60)
dataset_split(query, 80)
query = 'green tomato'
downloader.download(query, limit=100, output_dir='./', adult_filter_off=True, force_replace=False, timeout=60)
dataset_split(query, 80)
query = 'tomato blossom-end rot'
downloader.download(query, limit=100, output_dir='./', adult_filter_off=True, force_replace=False, timeout=60)
dataset_split(query, 80)
Import torch and select cuda or cpu. (cuda applies only when running locally with a GPU available.)
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, models, transforms
import numpy as np
import time

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Image normalization and conversion to tensors + train/test set split + training hyperparameter setup
transforms_train = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # ImageNet channel mean / std
])

transforms_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

data_dir = './custom_dataset'
train_datasets = datasets.ImageFolder(os.path.join(data_dir, 'train'), transforms_train)
test_datasets = datasets.ImageFolder(os.path.join(data_dir, 'test'), transforms_test)

train_dataloader = torch.utils.data.DataLoader(train_datasets, batch_size=4, shuffle=True, num_workers=4)
test_dataloader = torch.utils.data.DataLoader(test_datasets, batch_size=4, shuffle=True, num_workers=4)

print('size of train data set:', len(train_datasets))
print('size of test data set:', len(test_datasets))

class_names = train_datasets.classes
print('class:', class_names)
Import matplotlib + show a sample of training images
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.font_manager as fm

def imshow(input, title):
    # undo the normalization and convert the (C, H, W) tensor to an (H, W, C) image
    input = input.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    input = std * input + mean
    input = np.clip(input, 0, 1)
    plt.imshow(input)
    plt.title(title)
    plt.show()

# display one batch of training images with their class labels
iterator = iter(train_dataloader)
inputs, classes = next(iterator)
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
Download the pretrained model + set the learning parameters
model = models.resnet18(pretrained=True)
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 3)  # replace the final layer with a 3-class output
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
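As an optional variation that is not in the original notebook, the pretrained backbone could be frozen so that only the new classifier head is trained; a minimal sketch under that assumption:

# hypothetical variation: freeze all pretrained weights and train only the new head
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False            # freeze the pretrained backbone
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 3)      # the new layer has requires_grad=True by default
model = model.to(device)

optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)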
Model training code (train)
num_epochs = 50
model.train()
start_time = time.time()

train_loss_list = []
train_acc_list = []

for epoch in range(num_epochs):
    running_loss = 0.
    running_corrects = 0

    for inputs, labels in train_dataloader:
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        loss = criterion(outputs, labels)

        loss.backward()
        optimizer.step()

        running_loss += loss.item() * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)

    train_loss = running_loss / len(train_datasets)
    train_acc = running_corrects / len(train_datasets) * 100.

    train_loss_list.append(train_loss)
    train_acc_list.append(train_acc.cpu())

    print('#{} Loss: {:.4f} Acc: {:.4f}% Time: {:.4f}s'.format(epoch, train_loss, train_acc, time.time() - start_time))
Model validation
model.eval()
start_time = time.time()

with torch.no_grad():
    running_loss = 0.
    running_corrects = 0

    for inputs, labels in test_dataloader:
        inputs = inputs.to(device)
        labels = labels.to(device)

        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        loss = criterion(outputs, labels)

        running_loss += loss.item() * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)

        print(f'[result: {class_names[preds[0]]}] (answer: {class_names[labels.data[0]]})')
        imshow(inputs.cpu().data[0], title='result: ' + class_names[preds[0]])

    val_loss = running_loss / len(test_datasets)
    val_acc = running_corrects / len(test_datasets) * 100.
    print('[Test Phase] Loss: {:.4f} Acc: {:.4f}% Time: {:.4f}s'.format(val_loss, val_acc, time.time() - start_time))
Visualizing the training results
%matplotlib inline

epochs = range(num_epochs)

plt.plot(epochs, train_acc_list, 'r', label='Training Accuracy')
plt.title('Training Accuracy')
plt.figure()

plt.plot(epochs, train_loss_list, 'r', label='Training Loss')
plt.title('Training Loss')
Model test
from PIL import Image

image = Image.open('test_image.jpg')
image = transforms_test(image).unsqueeze(0).to(device)

with torch.no_grad():
    outputs = model(image)
    _, preds = torch.max(outputs, 1)
    imshow(image.cpu().data[0], title='result: ' + class_names[preds[0]])
Test result
Full source code: https://colab.research.google.com/drive/1lpN_TaM_HOyHQIJIW1DauymGmR36_n54#scrollTo=BpDUInMGr1ep