Build torch-mlir as an LLVM external project:
cmake -GNinja -Bbuild \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DPython3_FIND_VIRTUALENV=ONLY \
-DLLVM_ENABLE_PROJECTS=mlir \
-DLLVM_EXTERNAL_PROJECTS="torch-mlir;torch-mlir-dialects" \
-DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR=`pwd` \
-DLLVM_EXTERNAL_TORCH_MLIR_DIALECTS_SOURCE_DIR=`pwd`/externals/llvm-external-projects/torch-mlir-dialects \
-DMLIR_ENABLE_BINDINGS_PYTHON=ON \
-DLLVM_TARGETS_TO_BUILD=host \
externals/llvm-project/llvm
cmake --build build --target tools/torch-mlir/all
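
If the build succeeds, the tools land in build/bin (assuming the default LLVM build layout); a quick smoke test:
# Sanity-check the freshly built binary
./build/bin/torch-mlir-opt --version
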
git submodule update --init --progress
git add -u
git commit --amend --no-edit
git reset --hard HEAD~1
git push origin as_stride --force
pip3 install clang-format
git clang-format HEAD~1
torch-mlir-opt -convert-torch-to-tosa /tmp/index.mlir | externals/llvm-project/mlir/utils/generate-test-checks.py
Other useful lowering flags:
--convert-torch-to-linalg
--torch-backend-to-linalg-on-tensors-backend-pipeline
torch-mlir-opt --convert-torch-onnx-to-torch --torch-decompose-complex-ops --cse --canonicalize --convert-torch-to-linalg reshape.default.onnx.mlir --debug
torch-mlir-opt -convert-torch-to-tosa /tmp/index.mlir -mlir-print-ir-after-all -mlir-disable-threading --mlir-print-ir-before-all --debug
torch-mlir-opt --mlir-elide-elementsattrs-if-larger=4 non_elided.mlir > elided.mlir
grep -r "AveragePool" Inception_v4_vaiq_int8.default.torch-onnx.mlir

AmosLewis commented Dec 21, 2023

Find the CPU info by running lscpu.
Then use the CPU family and model numbers to look up the microarchitecture at https://en.wikichip.org/wiki/intel/cpuid (searching that page for family 6, model 63 turns up Haswell).
That tells us which flag to pass to iree-compile:
--iree-llvmcpu-target-cpu=haswell
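
A minimal sketch of the whole lookup-and-compile flow (model.mlir and model.vmfb are hypothetical file names):
# Pull the family/model fields out of lscpu
lscpu | grep -E 'CPU family|Model:'
# Compile for the microarchitecture found above (Haswell in this case)
iree-compile --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-cpu=haswell model.mlir -o model.vmfb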


AmosLewis commented Jan 16, 2024

az network bastion ssh --name "bastion-server-east1" --resource-group "pdue-nod-ai-rg" --target-ip-address "10.0.0.8" --auth-type "ssh-key" --username "chi" --ssh-key "C:\Users\chiliu12\chi-cpu_key.pem"
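
To copy files through the same bastion, one option is a local tunnel plus scp (a sketch assuming the az network bastion tunnel subcommand; the local port 50022 and file name are placeholders):
az network bastion tunnel --name "bastion-server-east1" --resource-group "pdue-nod-ai-rg" --target-ip-address "10.0.0.8" --resource-port 22 --port 50022
scp -P 50022 -i "C:\Users\chiliu12\chi-cpu_key.pem" somefile.zip chi@127.0.0.1:~/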


AmosLewis commented Apr 2, 2024

ONNX e2eshark test: adding attributes to a node.

import onnx

# Create an Add node
add_node = onnx.helper.make_node("Add", inputs=["A", "B"], outputs=["C"])

# Attach an attribute to the node (illustrative only: the ONNX Add op
# schema does not actually define an "alpha" attribute)
add_node.attribute.append(onnx.helper.make_attribute("alpha", 2.0))

# Declare graph inputs/outputs so the graph is well-formed
# (the float [1] shapes are placeholders)
A = onnx.helper.make_tensor_value_info("A", onnx.TensorProto.FLOAT, [1])
B = onnx.helper.make_tensor_value_info("B", onnx.TensorProto.FLOAT, [1])
C = onnx.helper.make_tensor_value_info("C", onnx.TensorProto.FLOAT, [1])

# Serialize the ONNX graph
graph = onnx.helper.make_graph([add_node], "add_graph", inputs=[A, B], outputs=[C])
model = onnx.helper.make_model(graph)
onnx.save(model, "add_model.onnx")
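
To confirm the attribute actually landed, load the model back and print the graph (note that onnx.checker.check_model would reject the ad-hoc alpha attribute, since it is not in the Add op schema):
python -c "import onnx; m = onnx.load('add_model.onnx'); print(onnx.helper.printable_graph(m.graph))"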


AmosLewis commented Apr 25, 2024

SHARK-TestSuite/e2eshark useful commands:
Run one onnx model:
python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --runupto torch-mlir --torchtolinalg --ireebuild ../../iree-build --tests onnx/models/retinanet_resnet50_fpn_vaiq_int8

Run all onnx models:
python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --runupto iree-compile --torchtolinalg --ireebuild ../../iree-build --report

Run one op:
python run.py -c ../../torch-mlir/build/ -i ../../iree-build/ -f onnx --tests onnx/operators/ReduceProdKeepdims0 --cachedir cachedir --report --runupto torch-mlir --torchtolinalg

Run all the pytorch models:
python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --ireebuild ../../iree-build --runupto iree-compile -f pytorch -g models --mode onnx

Run one pytorch model:
python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --ireebuild ../../iree-build --runupto iree-compile -f pytorch -g models --mode onnx --tests onnx/models/retinanet_resnet50_fpn_vaiq_int8
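
run.py has more filters and modes than shown above; assuming the usual argparse setup, list them all with:
python ./run.py --help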


AmosLewis commented May 8, 2024

Upload a big zip file from the VM to Azure Storage:
az storage blob upload --account-name onnxstorage --container-name onnxstorage --name bugcases/torchtolinalgpipelineissue.zip --file torchtolinalgpipelineissue.zip --auth-mode key
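
The matching download on another machine (a sketch using the symmetric az subcommand with the same account and container):
az storage blob download --account-name onnxstorage --container-name onnxstorage --name bugcases/torchtolinalgpipelineissue.zip --file torchtolinalgpipelineissue.zip --auth-mode key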


Install torch-mlir and IREE dev wheels:
pip install \
            --find-links https://github.com/llvm/torch-mlir-release/releases/expanded_assets/dev-wheels \
            --upgrade \
            torch-mlir
pip install \
            --find-links https://iree.dev/pip-release-links.html \
            --upgrade \
            iree-compiler \
            iree-runtime
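
A quick smoke test that the wheels installed (package and tool names as shipped by the upstream projects):
python -c "import torch_mlir"
iree-compile --version
iree-run-module --help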


Run iree_tests:

 pytest SHARK-TestSuite/iree_tests/onnx/ \
  -rpfE \
  --numprocesses 24 \
  --timeout=30 \
  --durations=20 \
  --no-skip-tests-missing-files \
  --config-files=/proj/gdba/shark/chi/src/iree/build_tools/pkgci/external_test_suite/onnx_cpu_llvm_sync.json \
  --report-log=/proj/gdba/shark/chi/src/iree_log.txt
