πŸ–₯️ Programming

πŸ–₯️ Programming/Python

[Python] Automatically reloading modules I wrote - autoreload

%load_ext autoreload
%autoreload 2

`%autoreload 2` means: always reload modules before executing Python code. Modules are reloaded automatically even while they are in use. Useful when importing modules from the same directory.

Reference
[1] https://m.blog.naver.com/wideeyed/221225290242
[2] https://rogerheederer.github.io/Colab_ModuleReload/
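`%autoreload` is an IPython magic, so outside a notebook the same effect comes from `importlib.reload`, which is what the extension automates. A minimal sketch of that mechanism (the module name `mymod` is made up for illustration):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # skip .pyc caching so reload always re-reads source

# Write a tiny module to a temp directory and make it importable
tmp = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp) / "mymod.py"
mod_path.write_text("VERSION = 1\n")
sys.path.insert(0, tmp)

import mymod
print(mymod.VERSION)                   # 1

mod_path.write_text("VERSION = 2\n")   # simulate editing the module on disk
importlib.reload(mymod)                # what %autoreload 2 does for you before each run
print(mymod.VERSION)                   # 2
```

With `%autoreload 2` active, the `importlib.reload` call happens implicitly before every cell execution, so edits to your own modules are picked up without restarting the kernel.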

πŸ–₯️ Programming/Pytorch

[Pytorch] if __name__ == '__main__'

In Python code you will often see `if __name__ == '__main__':`. This is a pattern used to distinguish running a file directly from importing it as a module: the code inside the if block runs only when the module is executed directly by the interpreter, not when it is imported. Suppose we have the following code:

```python
# my_module.py
def my_function():
    print("function inside the module executed")

if __name__ == '__main__':
    print("executed directly")
```

If another module `another_module.py` calls `my_function()` as follows, the output is "function inside the module executed". # another_mo..
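The two paths can be demonstrated in one runnable sketch: the same file skips the guard when imported, and fires it when executed as a script via `runpy` (which sets `__name__` to `'__main__'`, just like running the file directly). The module name `demo_mod` is made up for illustration:

```python
import pathlib
import runpy
import sys
import tempfile

src = (
    "def my_function():\n"
    "    return 'function inside the module'\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    print('executed directly')\n"
)
tmp = tempfile.mkdtemp()
path = pathlib.Path(tmp) / "demo_mod.py"
path.write_text(src)
sys.path.insert(0, tmp)

import demo_mod                       # imported: __name__ == 'demo_mod', guard is skipped
print(demo_mod.my_function())         # function inside the module

# run_path executes the file with __name__ == '__main__', so the guard fires
runpy.run_path(str(path), run_name="__main__")   # prints: executed directly
```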

πŸ–₯️ Programming/Pytorch

[Pytorch] λͺ¨λΈ ν•™μŠ΅ 및 예츑 μ½”λ“œ - torch.no_grad(), loss.backward(), optimizer.step(), optimizer_zero_grad()

```python
def training(epoch, model, trainloader, validloader):
    correct = 0
    total = 0
    running_loss = 0
    model.train()
    for b in trainloader:
        x, y = b.text, b.label
        x, y = x.to(device), y.to(device)  # must be on the same device as the model
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():  # inference
            y_pred = torch.argmax(y_pred, dim=1)
            correct += (y_pred == y).sum(..
```
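The key detail in the loop above is that PyTorch accumulates gradients: each `loss.backward()` adds into `param.grad`, which is why `optimizer.zero_grad()` must run once per batch. A torch-free sketch of that zero/backward/step cycle (`ToyParam` and the helper names here are invented for illustration, not PyTorch API):

```python
class ToyParam:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

def backward(p, g):
    p.grad += g               # like loss.backward(): accumulate into .grad

def step(p, lr=0.1):
    p.value -= lr * p.grad    # like optimizer.step(): gradient-descent update

def zero_grad(p):
    p.grad = 0.0              # like optimizer.zero_grad(): reset before backward

p = ToyParam(1.0)
backward(p, 2.0)
backward(p, 2.0)              # forgot to zero: gradients from two batches pile up
print(p.grad)                 # 4.0

zero_grad(p)
backward(p, 2.0)              # zeroed first: only this batch's gradient remains
step(p)
print(p.value)                # 1.0 - 0.1 * 2.0 = 0.8
```

This is why `optimizer.zero_grad()` sits right before `loss.backward()` in the training function: without it, each batch's update would be contaminated by gradients from earlier batches.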

πŸ–₯️ Programming/Pytorch

[Pytorch] F.relu()와 nn.ReLU()의 차이점

nn.ReLU() νŒŒμ΄ν† μΉ˜μ˜ 신경망 λͺ¨λ“ˆ 쀑 ν•˜λ‚˜μΈ ν™œμ„±ν™” ν•¨μˆ˜ λͺ¨λ“ˆ `nn.Sequential()` λͺ¨λΈμ— μΆ”κ°€ν•  수 μžˆλŠ” `nn.Module`을 λ§Œλ“¬ λͺ¨λΈμ˜ `__init__` λ©”μ„œλ“œμ—μ„œ λ ˆμ΄μ–΄λ‘œ μ΄ˆκΈ°ν™”λ˜λ©°, κ·Έ λ ˆμ΄μ–΄κ°€ `forward()` λ©”μ„œλ“œμ—μ„œ μ‚¬μš©λ  λ•Œ ν™œμ„±ν™” ν•¨μˆ˜κ°€ 적용 F.relu() νŒŒμ΄ν† μΉ˜μ˜ ν•¨μˆ˜ 라이브러리인 `torch.nn.functional`의 ν•¨μˆ˜ ν•¨μˆ˜λ‘œμ„œ 주둜 λͺ¨λΈμ˜ forward μ—°μ‚° λ‚΄μ—μ„œ ν™œμ„±ν™” ν•¨μˆ˜λ₯Ό μ μš©ν•˜λŠ” 데 μ‚¬μš© λͺ¨λΈμ˜ forward λ©”μ„œλ“œμ—μ„œ ν•¨μˆ˜λ‘œ 직접 호좜 https://discuss.pytorch.org/t/whats-the-difference-between-nn-relu-vs-f-relu/27599 https://asidefine.tistory.com/80

Junyeong Son
List of posts in the 'πŸ–₯️ Programming' category