[Test] : pytorch_vision_meal_v2.md translation #3
base: main
Changes from 1 commit
@@ -17,37 +17,34 @@ order: 10
demo-model-link: https://huggingface.co/spaces/pytorch/MEAL-V2
---
We require one additional Python dependency
추가로 1개의 파이썬 패키지를 설치해야 합니다.
```bash
!pip install timm
```
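A quick way to confirm the dependency is in place before moving on is to import it. This is an optional sanity check, not part of the tutorial file itself:

```python
# Optional sanity check (not in the original file): timm should now import cleanly.
import timm
print(timm.__version__)
```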
```python
import torch
# list of models: 'mealv1_resnest50', 'mealv2_resnest50', 'mealv2_resnest50_cutmix', 'mealv2_resnest50_380x380', 'mealv2_mobilenetv3_small_075', 'mealv2_mobilenetv3_small_100', 'mealv2_mobilenet_v3_large_100', 'mealv2_efficientnet_b0'
# load pretrained models, using "mealv2_resnest50_cutmix" as an example
# 모델 종류: 'mealv1_resnest50', 'mealv2_resnest50', 'mealv2_resnest50_cutmix', 'mealv2_resnest50_380x380', 'mealv2_mobilenetv3_small_075', 'mealv2_mobilenetv3_small_100', 'mealv2_mobilenet_v3_large_100', 'mealv2_efficientnet_b0'
# 사전에 학습된 "mealv2_resnest50_cutmix"을 로딩하는 예시입니다.

model = torch.hub.load('szq0214/MEAL-V2','meal_v2', 'mealv2_resnest50_cutmix', pretrained=True)
model.eval()
```
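Any of the other names in the model list above can be passed to the same `torch.hub.load` entry point; for example, a minimal sketch loading `mealv2_efficientnet_b0` instead (only the model-name argument changes):

```python
import torch

# Loading a different variant from the list above; only the third argument changes.
model_b0 = torch.hub.load('szq0214/MEAL-V2', 'meal_v2', 'mealv2_efficientnet_b0', pretrained=True)
model_b0.eval()
```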
All pre-trained models expect input images normalized in the same way,
i.e. mini-batches of 3-channel RGB images of shape `(3 x H x W)`, where `H` and `W` are expected to be at least `224`.
The images have to be loaded in to a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]`
and `std = [0.229, 0.224, 0.225]`.
사전에 학습된 모든 모델은 동일한 방식으로 정규화된 입력 이미지, 즉, `H` 와 `W` 는 최소 `224` 이상인 `(3 x H x W)` 형태의 3-채널 RGB 이미지의 미니 배치를 요구합니다. 이미지를 `[0, 1]` 범위에서 로드한 다음 `mean = [0.485, 0.456, 0.406]` 과 `std = [0.229, 0.224, 0.225]` 를 통해 정규화합니다.
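The diff below only shows the `Normalize` step of the preprocessing code, so here is a minimal sketch of a full torchvision pipeline that meets these requirements. The `Resize(256)`/`CenterCrop(224)` sizes are the conventional choices and are assumptions here rather than lines quoted from this file:

```python
from torchvision import transforms

# Typical preprocessing matching the requirements above: crop to 224x224,
# scale pixel values to [0, 1], then normalize with the given mean and std.
preprocess = transforms.Compose([
    transforms.Resize(256),      # assumption: conventional resize before cropping
    transforms.CenterCrop(224),  # H and W must be at least 224
    transforms.ToTensor(),       # converts to a [0, 1] float tensor
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```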
Here's a sample execution.
실행 예시입니다.
```python
# Download an example image from the pytorch website
# 파이토치 웹사이트에서 예제 이미지 다운로드
import urllib
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
try: urllib.URLopener().retrieve(url, filename)
except: urllib.request.urlretrieve(url, filename)
```
```python
# sample execution (requires torchvision)
# 실행 예시 (torchvision 필요)
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
@@ -58,32 +55,32 @@ preprocess = transforms.Compose([
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
input_batch = input_tensor.unsqueeze(0) # 모델에서 요구하는 미니배치 생성

# move the input and model to GPU for speed if available
# 가능하다면 속도를 위해 입력과 모델을 GPU로 옮깁니다.
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
# shape이 1000이며 ImageNet의 1000개 클래스에 대한 신뢰도 점수(confidence score)가 있는 Tensor

print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
# output엔 정규화되지 않은 신뢰도 점수가 있습니다. 확률 값을 얻으려면 소프트맥스를 실행하세요.

probabilities = torch.nn.functional.softmax(output[0], dim=0)
print(probabilities)
```
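As a quick sanity check (not part of the original snippet), the softmax output should sum to roughly 1, and the single most likely class index can be read off with `argmax`:

```python
# Optional checks on the softmax output.
print(probabilities.sum().item())          # expect a value very close to 1.0
print(torch.argmax(probabilities).item())  # index of the top-1 ImageNet class
```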
```
# Download ImageNet labels
# ImageNet 레이블 다운로드
!wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
```
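If `wget` is not available (for example outside a notebook shell), the label file can be downloaded with the same `urllib` approach used for the example image earlier; a minimal alternative sketch:

```python
# Alternative to the shell command above: fetch the label file with urllib.
import urllib.request
url = "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt"
urllib.request.urlretrieve(url, "imagenet_classes.txt")
```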
```
# Read the categories
# 카테고리 읽기
with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]
# Show top categories per image
# 이미지별 Top5 카테고리 조회
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
```
Coming to this page for the first time, starting straight away with the word 추가로 ("additionally") feels like picking up the text in the middle! It would read better rephrased along the lines of 하나의 추가적인 패키지가 필요합니다. ("one additional package is needed") or 하나의 추가적인 패키지가 요구됩니다. ("one additional package is required").
Since you translated "requires" as 필요 ("needed") in the # sample execution (requires torchvision) line below, 하나의 추가적인 패키지가 필요합니다. seems like the better choice.
Reading it again, it definitely does feel abrupt..!