data/pytorch-vision-0.8.1/examples/python/video_api.ipynb:518: asume ==> assume
data/pytorch-vision-0.8.1/packaging/windows/internal/check_deps.bat:55: virual ==> virtual, viral
data/pytorch-vision-0.8.1/packaging/windows/internal/clone.bat:3: seperated ==> separated
data/pytorch-vision-0.8.1/packaging/windows/internal/nightly_defaults.bat:9: packge ==> package
data/pytorch-vision-0.8.1/references/detection/coco_utils.py:122: critera ==> criteria
data/pytorch-vision-0.8.1/test/common_utils.py:192: substraction ==> subtraction
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:131: nd ==> and, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:161: ue ==> use, due
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:161: Ba ==> By, be
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:256: naX ==> max, nad
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:256: Ket ==> Kept
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:307: nD ==> and, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:307: weAS ==> was
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:307: naNE ==> name
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:354: teY ==> they
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:354: nd ==> and, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:354: Fo ==> Of, for
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:354: tE ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:354: MyBE ==> maybe
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:377: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:377: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:377: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:377: COO ==> COUP
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: Nd ==> And, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:403: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:429: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:429: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:429: TBE ==> THE
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:429: Nd ==> And, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:429: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: VIE ==> VIA
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: paLN ==> plan, pain, palm
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: ND ==> AND, 2ND
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: WhIs ==> this
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: Edn ==> End
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: Nd ==> And, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: Nd ==> And, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:455: te ==> the, be, we
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:481: Nd ==> And, 2nd
data/pytorch-vision-0.8.1/test/sanity_checks.ipynb:481: te ==> the, be, we
data/pytorch-vision-0.8.1/test/test_models_detection_utils.py:25: paramter ==> parameter
data/pytorch-vision-0.8.1/test/test_models_detection_utils.py:40: paramter ==> parameter
data/pytorch-vision-0.8.1/test/test_models_detection_utils.py:55: paramter ==> parameter
data/pytorch-vision-0.8.1/test/test_ops.py:59: opeartions ==> operations
data/pytorch-vision-0.8.1/test/test_video.py:130: ealier ==> earlier
data/pytorch-vision-0.8.1/test/test_video_reader.py:135: ealier ==> earlier
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/ROIAlign_cpu.cpp:94: indeces ==> indices
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/ROIAlign_cpu.cpp:166: indeces ==> indices
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/ROIAlign_cpu.cpp:166: chanels ==> channels
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/audio_sampler.cpp:68: faield ==> failed
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/audio_sampler.cpp:119: faield ==> failed
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/audio_sampler.cpp:135: faield ==> failed
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/audio_sampler.cpp:143: faield ==> failed
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/audio_sampler.cpp:161: faield ==> failed
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/defs.h:55: orignal ==> original
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/defs.h:287: diffrent ==> different
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/memory_buffer.cpp:64: capabilty ==> capability
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/sync_decoder_test.cpp:367: capabilty ==> capability
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/decoder/sync_decoder_test.cpp:406: capabilty ==> capability
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.cpp:198: calback ==> callback
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.cpp:255: calback ==> callback
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.cpp:279: calback ==> callback
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.cpp:286: exeption ==> exception, exemption
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.cpp:293: successfull ==> successful
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.h:39: retruns ==> returns
data/pytorch-vision-0.8.1/torchvision/csrc/cpu/video/Video.h:40: comination ==> combination
data/pytorch-vision-0.8.1/torchvision/datasets/fakedata.py:13: datset ==> dataset
data/pytorch-vision-0.8.1/torchvision/datasets/mnist.py:344: ot ==> to, of, or
data/pytorch-vision-0.8.1/torchvision/datasets/mnist.py:472: nd ==> and, 2nd
data/pytorch-vision-0.8.1/torchvision/datasets/mnist.py:474: nd ==> and, 2nd
data/pytorch-vision-0.8.1/torchvision/datasets/mnist.py:474: nd ==> and, 2nd
data/pytorch-vision-0.8.1/torchvision/datasets/mnist.py:477: nd ==> and, 2nd
data/pytorch-vision-0.8.1/torchvision/datasets/mnist.py:478: nd ==> and, 2nd
data/pytorch-vision-0.8.1/torchvision/io/__init__.py:76: acces ==> access
data/pytorch-vision-0.8.1/torchvision/io/__init__.py:142: acces ==> access
data/pytorch-vision-0.8.1/torchvision/io/__init__.py:147: succes ==> success
data/pytorch-vision-0.8.1/torchvision/io/_video_opt.py:196: orignal ==> original
data/pytorch-vision-0.8.1/torchvision/io/_video_opt.py:360: orignal ==> original
data/pytorch-vision-0.8.1/torchvision/models/mnasnet.py:78: rouding ==> rounding
data/pytorch-vision-0.8.1/torchvision/models/detection/backbone_utils.py:88: wont ==> won't
data/pytorch-vision-0.8.1/torchvision/models/detection/roi_heads.py:301: does'nt ==> doesn't
data/pytorch-vision-0.8.1/torchvision/models/detection/rpn.py:233: throught ==> thought, through, throughout
data/pytorch-vision-0.8.1/torchvision/ops/_box_convert.py:55: bouding ==> bounding
data/pytorch-vision-0.8.1/torchvision/transforms/_functional_video.py:63: dimenions ==> dimensions
data/pytorch-vision-0.8.1/torchvision/transforms/_transforms_video.py:132: dimenions ==> dimensions
data/pytorch-vision-0.8.1/torchvision/transforms/_transforms_video.py:153: horizonal ==> horizontal
data/pytorch-vision-0.8.1/torchvision/transforms/functional_tensor.py:575: occuring ==> occurring
data/pytorch-vision-0.8.1/torchvision/transforms/transforms.py:1045: lenght ==> length
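Each finding above follows the codespell report shape `path:line: typo ==> suggestion[, suggestion...]`. To post-process such a report (e.g. to group findings by file or filter out false positives like the `nd`/`te` hits in notebook output cells), the lines can be parsed mechanically. A minimal sketch, assuming this exact output shape; the function name is hypothetical, not part of codespell itself:

```python
import re

# One codespell-style finding per line:
#   path:line: typo ==> suggestion[, suggestion...]
FINDING_RE = re.compile(
    r"^(?P<path>.+?):(?P<line>\d+): (?P<typo>\S+) ==> (?P<fixes>.+)$"
)

def parse_finding(line):
    """Parse one report line into (path, line_number, typo, [suggestions]),
    or None if the line does not match the report shape."""
    m = FINDING_RE.match(line.strip())
    if m is None:
        return None
    return (
        m.group("path"),
        int(m.group("line")),
        m.group("typo"),
        # Multiple candidate corrections are comma-separated.
        [s.strip() for s in m.group("fixes").split(",")],
    )
```

Findings with several suggestions (e.g. `virual ==> virtual, viral`) come back as a list of candidates, which is useful because codespell cannot auto-fix those and they need manual review.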