
Commit 5780b86

Brianma/windowsai fi (#2475)
* update dockerfiles/README (#2336)
* Make elementwise ops run 4 items per thread (#2335): unroll the for loop to leverage ILP and remove an unnecessary N==0 check inside the elementwise GPU kernel. Improves GPU elementwise op performance, with a ~2% gain on a popular NLP BERT model.
* Add CUDA GatherElements kernel (#2310)
* Layer Normalization fusion (#2319): basic layer normalization transform.
* Add FastGelu CUDA op for Gelu and add-bias fusion (#2293): adds unit tests; the half2 optimization is enabled only when the CUDA arch is >= 7.0, and _Tanh moves to common.cuh.
* Implement CPU contrib op Attention (#2333)
* Remove unused initializers from the GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers (#2320): initializers replaced during graph optimization are no longer left in the GraphProto when saving an optimized model. Also handles the edge case where a model has an unused initializer with a matching graph input by removing the graph input too.
* Nuget pipeline changes (#2305): refactor the pipeline to remove duplicated code; move the Windows_py_GPU_Wheels job to Win-GPU-CUDA10 (the "Win-GPU" pool will be deprecated); delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml; in Linux nuget jobs, run "make install" before creating the package so extra RPATH info is removed.
* CUDA ReverseSequence op (#2281): maps types of the same size through the same template function.
* Set ElementType to String type of node metadata, instead of byte[] (#2348)
* Introduce PrimitiveType into the type system along with an integer constant (#2307): improves perf by avoiding GetType&lt;T&gt;() calls, introduces MLTypeCallDispatcher to switch on input type, and adds a fast Tensor IsType&lt;T&gt;() method.
* Fix/test handling of a dim value of 0 in a couple of places (#2337): the CUDA Where broadcasting logic now handles a dim of 0, with unit tests for Where and for a unary op; ngraph is excluded from the 0-dim Where test.
* OpenVINO EP R3.1 for onnxrt server (#2357): onnxrt server with OVEP, Dockerfile.server.openvino update, and review fixes.
* Implement CUDA NonZero op (#2056)
* Use a Python numpy array's memory directly if it is already contiguous (#2355): greatly improves performance for sessions with large inputs, e.g. a 1920x1080 Faster R-CNN image sees a 30-40% speedup; adds test cases for contiguous and non-contiguous numpy inputs.
* Add helper to create output to minimize binary size (#2365): adds a ConstEigenTensorMap typedef so the const input Tensor is not unnecessarily const_cast.
* Fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS (#2369)
* Add TraceLogging for profiling (#1639): enabled only if onnxruntime_ENABLE_INSTRUMENT is ON.
* Test BiDAF with Nuphar for the AVX target (#2370): increases Nuphar test coverage a bit.
* Fix a bug in TLS refcounting that may have destabilized CUDA CI (#2374)
* Update output size calculation for Resize (#2366): changes how the output size is calculated and adds tests for the opset 10 Resize.
* Extend the OneHot CPU kernel to support more types (#2311): int64_t input, int32_t depth, float output; BERT is skipped until the test data fix is picked up.
* Fix a bug with Slice (#2372): the flattened input dimensions must be passed in so the initial offset into the input is calculated correctly.
* Add an opset 11 version of Split to the CUDA ops (#2376): also organizes the CUDA op definitions so all the opset 10 and 11 parts are together (the same setup used for CPU ops).
* Layer Norm fusion fix (#2379): adds an input shape check in code and unit tests.
* Fuse Add + Gelu (#2360): implements the fusion transformer and the accurate kernel.
* Skip layer norm transform (#2350): adds the skip layer normalization transformer.
* Another try to stabilize CUDA CI (#2383): the root cause seems to be a cudaFree failure at teardown; its return code was ignored before, so the debug check should ignore it too.
* Fix a BUILD.md typo (#2375): build.py rejects --config RelWithDebugInfo (the valid choices are Debug, MinSizeRel, Release, RelWithDebInfo).
* Fix compilation with ngraph (#2388)
* Fix reuse logic in the allocation planner (#2393): also disallows reuse across string tensors.
* [NupharEP] multiple optimizations (#2380): fuse Transpose into MatMul, implement Pow and constant-scalar simplification, vectorize ReduceMean, improve symbolic shape inference, and improve debugging via fused function names.
* Avoid using the default logger in the graph lib and optimizers (#2361): use the session logger if it is available, and stop disabling warning 4100 globally (the warnings should be fixed instead).
* Change the CUDA Transpose implementation to support all fixed-size tensor types (#2387): an untyped kernel keeps binary size minimal while adding 8-, 16-, 32- and 64-bit support plus unit tests; the implementation can be called directly (to be used by CUDA Scan); a cublas implementation is called when the type is MLFloat16; TensorRT is disabled for the MLFloat16 and int8 unit tests.
* Add opset 11 versions of the existing CUDA operators that had negative-axis support explicitly added (#2398)
* [NupharEP] force some low/zero-cost ops to be inlined (#2409)
* Fix a cross-compile bug (#2415)
* Minor optimization (#2417): if a node has already been placed, there is no need to find a kernel for it.
* Add Reshape fusion (#2395): FindPath in graphutils supports both input and output edges (one direction at a time) and returns false when multiple output edges match; non-constant initializers are allowed; adds a test case with only one constant initializer as Concat input; the ReshapeFusion class is refactored to allow more subgraph fusions in the future.
* [NupharEP] update notebook and docker image (#2416): adds BERT SQuAD to the Nuphar tutorial and improves speed-comparison readability.
* Fix an issue in matmul_add_fusion (#2407): if MatMul + Add has shapes [K] * [K, N], resetting to [1, K] * [K, N] makes the output shape [1, N] and would also require a reshape on the output. Fix: remove the shape reset so the pattern is not fused, and add a negative test case.
* feat(treeregressor): update TreeEnsembleRegressor for type support (#2389): allows double, float, int64, and int32 inputs to match the upstream specification. Signed-off-by: Nick Groszewski &lt;nicholas.groszewski@capitalone.com&gt;
* onnxrt server documentation update (#2396)
* Add support for the Pad-2 operator in the OpenVINO EP (#2405)
* Add CUDA If operator (#2377): uses the CPU operator for the implementation, but all inputs/outputs except the 'cond' input stay on GPU, so no extra logic is needed to avoid a copy to CPU across the control-flow node. Also improves documentation for onnxruntime::utils::SwapByteOrderCopy() and adds a precondition check.
* Fix the type constraints on the CUDA If operator to exclude strings (#2431)
* Add Im2col&lt;uint8_t&gt; (#2438)
* Adjust codegen vectorization width from the target (#2439)
* Add CUDA Scan operator (#2403): uses the CPU implementation for the logic, with device-specific functors for data that needs to be manipulated on a different device; the materialization logic in the OrtValue slicer can be overridden so DML can plug in its own handling.
* Fix Windows GPU C API packaging pipeline failure (#2440)
* Correctly handle implicit inputs for fused nodes (#2390): Nuphar's partitioning function now includes a node's implicit inputs in the MetaDef inputs list, fixing a crash in the ONNX graph checker. A related fix ensures Graph::SetGraphInputsOutputs adds implicit inputs to graph_inputs_excluding_initializers_: graph_inputs_including_initializers_ (populated by SetInputs, e.g. via FunctionImpl::FunctionImpl) may contain implicit inputs that are not any node's initializer, so it is first copied (skipping duplicates) and initializers are then erased, ensuring all implicit inputs remain. In addition, OpenVINO's GetCapability no longer iterates a std::set of NodeArg pointers, whose varying pointer values made input/output ordering nondeterministic across runs (e.g. inputs [A, B] in some runs but [B, A] in others); std::string keys are used instead, implicit inputs are added to meta->inputs, and a latent bug is fixed where a multi-use output NodeArg was erased from fused_outputs on encountering only one of its uses.
* Remove the DeviceAllocatorRegistry class (#2451)
* C# API and test for loading a custom op shared library (#2420): adds a C API test, changes to the C++ API header and C API implementation, and a C# API with a corresponding test.
* Parallel Gelu with ParallelFor (#2399): better Gelu performance.
* Clean up build.py (#2446)
* Pull the latest image before running docker build
* Fuse SkipLayerNorm with bias (#2453)
* Allow more than one invocation of CreateEnv in the same process (#2467): includes a centos build fix.
* Symbolic shape inference improvements (#2460): add a mode to guess unknown ops' output rank; add support for GatherND, If, dynamic shape in ConstantOfShape and Reshape, dynamic padding in Pad, and Resize with opset > 10 (output dims treated as dynamic); fix bugs in get_int_values when tensor rank > 1 (treated as no sympy data), in Concat when an input dim is 0, in ConstantOfShape where a computed dim was not updated, in Loop output shape where the loop iterator dim was not inserted at dim 0, and in Slice when starts/ends are dynamic; merge symbols to literals when ONNX silently merges dims; restrict input models to opset 7 and above; make the output model optional to avoid disk writes when testing; run model tests for symbolic shape inference; reduce the 2 GB Nuphar docker image size.
* Add an additional test data set for the nuget pipeline (#2448): adds a SAS token to download internal test data, plus a series of keyvault/secret/YAML fixes for the macOS and Windows data-download steps, and updates the test data set URL and location.
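The "4 items per thread" elementwise change above uses a standard GPU pattern: each thread handles a small fixed number of strided elements, and the fixed-count inner loop is unrolled so the independent operations can overlap (instruction-level parallelism). A pure-Python sketch of the indexing only (illustrative; `scale_chunk` is a made-up name, not the actual ORT kernel):

```python
ITEMS_PER_THREAD = 4

def scale_chunk(data, factor, thread_id, num_threads):
    """Process ITEMS_PER_THREAD elements per 'thread', strided across threads.

    On the GPU the fixed-count inner loop is unrolled, so the four
    independent multiplies can be issued without loop overhead.
    """
    n = len(data)
    stride = num_threads * ITEMS_PER_THREAD
    idx = thread_id * ITEMS_PER_THREAD
    while idx < n:
        for i in range(ITEMS_PER_THREAD):  # unrolled in the real kernel
            j = idx + i
            if j < n:  # bounds check only at the tail
                data[j] *= factor
        idx += stride
```

Running every `thread_id` in `range(num_threads)` covers each element exactly once, which is why the per-element `N==0` check the PR removed was redundant.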
1 parent 0f5f17c commit 5780b86
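As background for the FastGelu and Add + Gelu items above: fast Gelu kernels typically compute the tanh approximation of Gelu rather than the exact erf form, and fusing the preceding bias-add into the activation saves one elementwise pass over the tensor. A pure-Python sketch of the math only (function names here are illustrative, not the CUDA kernels):

```python
import math

def gelu_exact(x: float) -> float:
    # Exact Gelu: x * Phi(x), with Phi the standard normal CDF via erf.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh_approx(x: float) -> float:
    # The tanh approximation commonly used by fast Gelu kernels.
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))

def add_bias_gelu(x: float, bias: float) -> float:
    # Fusing Add + Gelu: apply the bias inside the activation kernel
    # instead of materializing an intermediate tensor.
    return gelu_tanh_approx(x + bias)
```

The two forms agree closely over typical activation ranges, which is why the approximation is acceptable for inference kernels.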

File tree

420 files changed: +12,563 additions, -5,426 deletions
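The layer-normalization fusion PRs in this commit (#2319, #2350, #2379, #2453) collapse a subgraph of primitive ops (ReduceMean, Sub, Pow, Sqrt, Div, Mul, Add) into a single fused node. A minimal pure-Python reference for the computation being fused, normalizing over one axis (sketch only; the real kernels operate on tensors and `layer_norm` is an illustrative name):

```python
import math

def layer_norm(x, gamma, beta, eps=1e-5):
    # y = (x - mean) / sqrt(var + eps) * gamma + beta
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n  # population variance
    return [(v - mean) / math.sqrt(var + eps) * g + b
            for v, g, b in zip(x, gamma, beta)]
```

Fusing this into one kernel avoids several intermediate tensors and memory round-trips, which is the same motivation behind the SkipLayerNorm-with-bias fusion.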


.gitignore

Lines changed: 3 additions & 0 deletions

```diff
@@ -36,3 +36,6 @@ onnxruntime_profile*.json
 docs/python/*.onnx
 *.onnx
 onnxprofile_profile_test_*.json
+/csharp/packages
+/csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.targets
+/csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.props
```

BUILD.md

Lines changed: 27 additions & 12 deletions

````diff
@@ -80,7 +80,7 @@ For other system requirements and other dependencies, please see [this section](
 |Description|Command|Additional description|
 |-----------|-----------|-----------|
 |**Basic build**|build.bat (Windows)<br>./build.sh (Linux)||
-|**Debug build**|--config RelWithDebugInfo|Debug build|
+|**Debug build**|--config RelWithDebInfo|Debug build|
 |**Use OpenMP**|--use_openmp|OpenMP will parallelize some of the code for potential performance improvements. This is not recommended for running on single threads.|
 |**Build using parallel processing**|--parallel|This is strongly recommended to speed up the build.|
 |**Build Shared Library**|--build_shared_lib||
@@ -107,6 +107,7 @@ The complete list of build options can be found by running `./build.sh (or .\bui
 **Options**
 * [OpenMP](#OpenMP)
 * [OpenBLAS](#OpenBLAS)
+* [DebugNodeInputsOutputs](#DebugNodeInputsOutputs)
 
 **Architectures**
 * [x86](#x86)
@@ -260,19 +261,9 @@ See more information on the OpenVINO Execution Provider [here](./docs/execution_
 ./build.sh --config RelWithDebInfo --use_openvino <hardware_option>
 ```
 
-
-For Linux:
-
-<code>./build.sh --config RelWithDebInfo --use_openvino <hardware_option> </code>
-
-For Windows:
-
-<code> .\build.bat --config RelWithDebInfo --use_openvino <hardware_option> </code>
-
-*Note: The default Windows CMake Generator is Visual Studio 2017, but you can also use the newer Visual Studio 2019 by passing `--cmake_generator "Visual Studio 16 2019"` to `.\build.bat`*
-
 <code>--use_openvino</code>: Builds the OpenVINO Execution Provider in ONNX Runtime.
 
+<code>--build_server</code>: Using this flag in addition to --use_openvino builds the OpenVINO Execution Provider with ONNX Runtime Server.
 
 * `<hardware_option>`: Specifies the hardware target for building OpenVINO Execution Provider. Below are the options for different Intel target devices.
 
@@ -425,6 +416,30 @@ The DirectML execution provider supports building for both x64 and x86 architect
 
 ---
 
+### DebugNodeInputsOutputs
+OnnxRuntime supports build options for enabling debugging of intermediate tensor shapes and data.
+#### Build Instructions
+##### Set onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=1
+Dump tensor input/output shapes for all nodes to stdout.
+```
+# Linux
+./build.sh --cmake_extra_defines onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=1
+# Windows
+.\build.bat --cmake_extra_defines onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=1
+```
+##### Set onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=2
+Dump tensor input/output shapes and output data for all nodes to stdout.
+```
+# Linux
+./build.sh --cmake_extra_defines onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=2
+# Windows
+.\build.bat --cmake_extra_defines onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=2
+```
+##### Set onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=0
+To disable this functionality after previously enabling, set onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=0 or delete CMakeCache.txt.
+
+---
+
 ## Architectures
 ### x86
 #### Build Intsructions
````

cmake/CMakeLists.txt

Lines changed: 12 additions & 5 deletions

```diff
@@ -23,7 +23,6 @@ if (onnxruntime_BUILD_CSHARP)
 check_language(CSharp)
 if (CMAKE_CSharp_COMPILER)
 enable_language(CSharp)
-set(CMAKE_DOTNET_TARGET_FRAMEWORK_VERSION v4.6.1)
 set(CMAKE_CSharp_FLAGS ${CMAKE_CSharp_FLAGS} "/langversion:6")
 message(STATUS "CMAKE_Csharp_Compiler = ${CMAKE_CSharp_COMPILER}")
 else()
@@ -86,6 +85,7 @@ option(onnxruntime_USE_DML "Build with DirectML support" OFF)
 option(onnxruntime_USE_WINML "Build with WinML support" OFF)
 option(onnxruntime_USE_ACL "Build with ACL support" OFF)
 option(onnxruntime_USE_TELEMETRY "Build with Telemetry" OFF)
+option(onnxruntime_ENABLE_INSTRUMENT "Enable Instrument with Event Tracing for Windows (ETW)" OFF)
 
 set(protobuf_BUILD_TESTS OFF CACHE BOOL "Build protobuf tests" FORCE)
 #nsync tests failed on Mac Build
@@ -94,6 +94,15 @@ set(ONNX_ML 1)
 if(NOT onnxruntime_ENABLE_PYTHON)
 set(onnxruntime_ENABLE_LANGUAGE_INTEROP_OPS OFF)
 endif()
+
+if(NOT WIN32)
+#TODO: On Linux we may try https://github.com/microsoft/TraceLogging
+if(onnxruntime_ENABLE_INSTRUMENT)
+message(WARNING "Instrument is only supported on Windows now")
+set(onnxruntime_ENABLE_INSTRUMENT OFF)
+endif()
+endif()
+
 if(onnxruntime_USE_OPENMP)
 find_package(OpenMP)
 if (OPENMP_FOUND)
@@ -176,9 +185,7 @@ if (MSVC)
 set(gtest_force_shared_crt ON CACHE BOOL "Use shared (DLL) run-time lib for gtest" FORCE)
 endif()
 #Always enable exception handling, even for Windows ARM
-SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc")
-#Disable 4100 globally. Too many this kind errors in protobuf
-SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4100")
+SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc")
 if (NOT onnxruntime_USE_CUDA)
 SET (CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /Gw /GL")
 SET (CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /Gw /GL")
@@ -534,7 +541,7 @@ if (onnxruntime_USE_NGRAPH)
 add_definitions(-DUSE_NGRAPH=1)
 include(ngraph)
 list(APPEND onnxruntime_EXTERNAL_LIBRARIES ngraph)
-list(APPEND onnxruntime_EXTERNAL_DEPENDENCIES ngraph)
+list(APPEND onnxruntime_EXTERNAL_DEPENDENCIES project_ngraph)
 endif()
 
 if(onnxruntime_USE_OPENVINO)
```

cmake/external/ngraph.cmake

Lines changed: 3 additions & 2 deletions

```diff
@@ -71,8 +71,9 @@ if (MSVC)
 -Dprebuilt_ONNX_SOURCE_DIR=${prebuilt_ONNX_SOURCE_DIR}
 DEPENDS onnx
 )
-add_library(ngraph STATIC IMPORTED)
-set_property(TARGET ngraph PROPERTY IMPORTED_LOCATION ${ngraph_LIBRARIES}/ngraph.lib)
+add_library(ngraph SHARED IMPORTED)
+set_property(TARGET ngraph PROPERTY IMPORTED_LOCATION ${ngraph_LIBRARIES}/${NGRAPH_SHARED_LIB})
+set_property(TARGET ngraph PROPERTY IMPORTED_IMPLIB ${ngraph_LIBRARIES}/ngraph.lib)
 else()
 ExternalProject_Add(project_ngraph
 PREFIX ngraph
```

cmake/onnxruntime_csharp.cmake

Lines changed: 20 additions & 2 deletions

```diff
@@ -36,11 +36,29 @@ endif()
 
 include(CSharpUtilities)
 
-include_external_msproject(${CSHARP_MASTER_TARGET}
-${CSHARP_MASTER_PROJECT}
+include_external_msproject(Microsoft.ML.OnnxRuntime
+${CSHARP_ROOT}/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.csproj
 ${CSHARP_DEPENDS}
 )
 
+include_external_msproject(Microsoft.ML.OnnxRuntime.InferenceSample
+${CSHARP_ROOT}/sample/Microsoft.ML.OnnxRuntime.InferenceSample/Microsoft.ML.OnnxRuntime.InferenceSample.csproj
+${CSHARP_DEPENDS}
+)
+include_external_msproject(Microsoft.ML.OnnxRuntime.Tests
+${CSHARP_ROOT}/test/Microsoft.ML.OnnxRuntime.Tests/Microsoft.ML.OnnxRuntime.Tests.csproj
+${CSHARP_DEPENDS}
+)
+include_external_msproject(Microsoft.ML.OnnxRuntime.PerfTool
+${CSHARP_ROOT}/tools/Microsoft.ML.OnnxRuntime.PerfTool/Microsoft.ML.OnnxRuntime.PerfTool.csproj
+${CSHARP_DEPENDS}
+)
+
+#Exclude them from the ALL_BUILD target, otherwise it will trigger errors like:
+#"Error : Project 'cmake\..\csharp\src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj' targets 'netstandard1.1'. It cannot be referenced by a project that targets '.NETFramework,Version=v4.0'."
+#We can't fix it because cmake only supports the "TargetFrameworkVersion" property, not "TargetFramework".
+set_target_properties(Microsoft.ML.OnnxRuntime Microsoft.ML.OnnxRuntime.InferenceSample Microsoft.ML.OnnxRuntime.Tests Microsoft.ML.OnnxRuntime.PerfTool PROPERTIES EXCLUDE_FROM_ALL 1)
+
 # generate Directory.Build.props
 set(DIRECTORY_BUILD_PROPS_COMMENT "WARNING: This is a generated file, please do not check it in!")
 configure_file(${CSHARP_ROOT}/Directory.Build.props.in
```

cmake/onnxruntime_framework.cmake

Lines changed: 3 additions & 1 deletion

```diff
@@ -10,7 +10,9 @@ file(GLOB_RECURSE onnxruntime_framework_srcs CONFIGURE_DEPENDS
 source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_framework_srcs})
 
 add_library(onnxruntime_framework ${onnxruntime_framework_srcs})
-
+if(onnxruntime_ENABLE_INSTRUMENT)
+target_compile_definitions(onnxruntime_framework PRIVATE ONNXRUNTIME_ENABLE_INSTRUMENT)
+endif()
 target_include_directories(onnxruntime_framework PRIVATE ${ONNXRUNTIME_ROOT} PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
 onnxruntime_add_include_to_target(onnxruntime_framework onnxruntime_common onnx onnx_proto protobuf::libprotobuf)
 set_target_properties(onnxruntime_framework PROPERTIES FOLDER "ONNXRuntime")
```

cmake/onnxruntime_providers.cmake

Lines changed: 7 additions & 7 deletions

```diff
@@ -198,7 +198,7 @@ if (onnxruntime_USE_TENSORRT)
 add_definitions("-DONNX_ML=1")
 add_definitions("-DONNX_NAMESPACE=onnx")
 include_directories(${PROJECT_SOURCE_DIR}/external/protobuf)
-set(CUDA_INCLUDE_DIRS ${onnxruntime_CUDA_HOME}/include)
+set(CUDA_INCLUDE_DIRS ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES})
 set(TENSORRT_ROOT ${onnxruntime_TENSORRT_HOME})
 include_directories(${ONNXRUNTIME_ROOT}/../cmake/external/onnx)
 set(OLD_CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS})
@@ -231,11 +231,11 @@ if (onnxruntime_USE_TENSORRT)
 target_sources(getSupportedAPITest PRIVATE ${ONNXRUNTIME_ROOT}/test/win_getopt/mb/getopt.cc)
 target_include_directories(onnx2trt PRIVATE ${ONNXRUNTIME_ROOT}/test/win_getopt/mb/include)
 target_include_directories(getSupportedAPITest PRIVATE ${ONNXRUNTIME_ROOT}/test/win_getopt/mb/include)
-target_compile_options(nvonnxparser_static PRIVATE /FIio.h)
-target_compile_options(nvonnxparser PRIVATE /FIio.h)
-target_compile_options(trt_onnxify PRIVATE /FIio.h)
-target_compile_options(onnx2trt PRIVATE /FIio.h)
-target_compile_options(getSupportedAPITest PRIVATE /FIio.h)
+target_compile_options(nvonnxparser_static PRIVATE /FIio.h /wd4100)
+target_compile_options(nvonnxparser PRIVATE /FIio.h /wd4100)
+target_compile_options(trt_onnxify PRIVATE /FIio.h /wd4100)
+target_compile_options(onnx2trt PRIVATE /FIio.h /wd4100)
+target_compile_options(getSupportedAPITest PRIVATE /FIio.h /wd4100)
 endif()
 include_directories(${ONNXRUNTIME_ROOT}/../cmake/external/onnx-tensorrt)
 include_directories(${TENSORRT_INCLUDE_DIR})
@@ -273,7 +273,7 @@ if (onnxruntime_USE_NGRAPH)
 source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_providers_ngraph_cc_srcs})
 add_library(onnxruntime_providers_ngraph ${onnxruntime_providers_ngraph_cc_srcs})
 onnxruntime_add_include_to_target(onnxruntime_providers_ngraph onnxruntime_common onnxruntime_framework onnx onnx_proto protobuf::libprotobuf)
-add_dependencies(onnxruntime_providers_ngraph ngraph onnx ${onnxruntime_EXTERNAL_DEPENDENCIES})
+add_dependencies(onnxruntime_providers_ngraph project_ngraph onnx ${onnxruntime_EXTERNAL_DEPENDENCIES})
 set_target_properties(onnxruntime_providers_ngraph PROPERTIES FOLDER "ONNXRuntime")
 target_include_directories(onnxruntime_providers_ngraph PRIVATE ${ONNXRUNTIME_ROOT} ${ngraph_INCLUDE_DIRS})
 set_target_properties(onnxruntime_providers_ngraph PROPERTIES LINKER_LANGUAGE CXX)
```

cmake/onnxruntime_session.cmake

Lines changed: 3 additions & 0 deletions

```diff
@@ -12,6 +12,9 @@ source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_session_srcs})
 add_library(onnxruntime_session ${onnxruntime_session_srcs})
 install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/session DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core)
 onnxruntime_add_include_to_target(onnxruntime_session onnxruntime_common onnxruntime_framework onnx onnx_proto protobuf::libprotobuf)
+if(onnxruntime_ENABLE_INSTRUMENT)
+target_compile_definitions(onnxruntime_session PUBLIC ONNXRUNTIME_ENABLE_INSTRUMENT)
+endif()
 target_include_directories(onnxruntime_session PRIVATE ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS})
 add_dependencies(onnxruntime_session ${onnxruntime_EXTERNAL_DEPENDENCIES})
 set_target_properties(onnxruntime_session PROPERTIES FOLDER "ONNXRuntime")
```

cmake/onnxruntime_unittests.cmake

Lines changed: 33 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -478,7 +478,7 @@ endif()
478478

479479
add_library(onnx_test_data_proto ${TEST_SRC_DIR}/proto/tml.proto)
480480
if(WIN32)
481-
target_compile_options(onnx_test_data_proto PRIVATE "/wd4125" "/wd4456")
481+
target_compile_options(onnx_test_data_proto PRIVATE "/wd4125" "/wd4456" "/wd4100")
482482
endif()
483483
add_dependencies(onnx_test_data_proto onnx_proto ${onnxruntime_EXTERNAL_DEPENDENCIES})
484484

@@ -713,11 +713,15 @@ if (onnxruntime_BUILD_SERVER)
713713
set_source_files_properties("${TEST_SRC_DIR}/server/unit_tests/executor_test.cc" PROPERTIES COMPILE_FLAGS -Wno-unused-parameter)
714714
endif()
715715
endif()
716-
716+
717717
add_library(onnxruntime_test_utils_for_server ${onnxruntime_test_server_src})
718718
onnxruntime_add_include_to_target(onnxruntime_test_utils_for_server onnxruntime_test_utils_for_framework gtest gmock onnx onnx_proto server_proto server_grpc_proto)
719719
add_dependencies(onnxruntime_test_utils_for_server onnxruntime_server_lib onnxruntime_server_http_core_lib Boost ${onnxruntime_EXTERNAL_DEPENDENCIES})
720-
target_include_directories(onnxruntime_test_utils_for_server PUBLIC ${Boost_INCLUDE_DIR} ${REPO_ROOT}/cmake/external/re2 ${CMAKE_CURRENT_BINARY_DIR}/onnx ${ONNXRUNTIME_ROOT}/server ${ONNXRUNTIME_ROOT}/server/http ${ONNXRUNTIME_ROOT}/server/http/core ${ONNXRUNTIME_ROOT}/server/grpc ${ONNXRUNTIME_ROOT}/server ${ONNXRUNTIME_ROOT}/server/core PRIVATE ${ONNXRUNTIME_ROOT} )
720+
target_include_directories(onnxruntime_test_utils_for_server PUBLIC ${Boost_INCLUDE_DIR} ${REPO_ROOT}/cmake/external/re2 ${CMAKE_CURRENT_BINARY_DIR}/onnx ${ONNXRUNTIME_ROOT}/server ${ONNXRUNTIME_ROOT}/server/http ${ONNXRUNTIME_ROOT}/server/http/core ${ONNXRUNTIME_ROOT}/server/grpc ${ONNXRUNTIME_ROOT}/server ${ONNXRUNTIME_ROOT}/server/core PRIVATE ${ONNXRUNTIME_ROOT})
721+
if (onnxruntime_USE_OPENVINO)
722+
message(${OPENVINO_INCLUDE_DIR})
723+
target_include_directories(onnxruntime_test_utils_for_server PUBLIC ${OPENVINO_INCLUDE_DIR} ${OPENVINO_TBB_INCLUDE_DIR})
724+
endif()
721725
if(UNIX)
722726
target_compile_options(onnxruntime_test_utils_for_server PRIVATE "$<$<COMPILE_LANGUAGE:CUDA>:SHELL:-Xcompiler -Wno-error=sign-compare>"
723727
"$<$<NOT:$<COMPILE_LANGUAGE:CUDA>>:-Wno-error=sign-compare>")
@@ -772,6 +776,17 @@ if (onnxruntime_BUILD_SERVER)
 
 endif()
 
+#some ETW tools
+if(WIN32 AND onnxruntime_ENABLE_INSTRUMENT)
+  add_executable(generate_perf_report_from_etl ${ONNXRUNTIME_ROOT}/tool/etw/main.cc ${ONNXRUNTIME_ROOT}/tool/etw/eparser.h ${ONNXRUNTIME_ROOT}/tool/etw/eparser.cc ${ONNXRUNTIME_ROOT}/tool/etw/TraceSession.h ${ONNXRUNTIME_ROOT}/tool/etw/TraceSession.cc)
+  target_compile_definitions(generate_perf_report_from_etl PRIVATE "_CONSOLE" "_UNICODE" "UNICODE")
+  target_link_libraries(generate_perf_report_from_etl PRIVATE tdh Advapi32)
+
+  add_executable(compare_two_sessions ${ONNXRUNTIME_ROOT}/tool/etw/compare_two_sessions.cc ${ONNXRUNTIME_ROOT}/tool/etw/eparser.h ${ONNXRUNTIME_ROOT}/tool/etw/eparser.cc ${ONNXRUNTIME_ROOT}/tool/etw/TraceSession.h ${ONNXRUNTIME_ROOT}/tool/etw/TraceSession.cc)
+  target_compile_definitions(compare_two_sessions PRIVATE "_CONSOLE" "_UNICODE" "UNICODE")
+  target_link_libraries(compare_two_sessions PRIVATE ${GETOPT_LIB_WIDE} tdh Advapi32)
+endif()
+
 add_executable(onnxruntime_mlas_test ${TEST_SRC_DIR}/mlas/unittest.cpp)
 target_include_directories(onnxruntime_mlas_test PRIVATE ${ONNXRUNTIME_ROOT}/core/mlas/inc ${ONNXRUNTIME_ROOT})
 set(onnxruntime_mlas_test_libs onnxruntime_mlas onnxruntime_common)
@@ -781,3 +796,18 @@ endif()
 list(APPEND onnxruntime_mlas_test_libs Threads::Threads)
 target_link_libraries(onnxruntime_mlas_test PRIVATE ${onnxruntime_mlas_test_libs})
 set_target_properties(onnxruntime_mlas_test PROPERTIES FOLDER "ONNXRuntimeTest")
+
+
+add_library(custom_op_library SHARED ${REPO_ROOT}/onnxruntime/test/testdata/custom_op_library/custom_op_library.cc)
+target_include_directories(custom_op_library PRIVATE ${REPO_ROOT}/include)
+if(UNIX)
+  if (APPLE)
+    set(ONNXRUNTIME_CUSTOM_OP_LIB_LINK_FLAG "-Xlinker -dead_strip")
+  else()
+    set(ONNXRUNTIME_CUSTOM_OP_LIB_LINK_FLAG "-Xlinker --no-undefined -Xlinker --gc-sections")
+  endif()
+else()
+  set(ONNXRUNTIME_CUSTOM_OP_LIB_LINK_FLAG "-DEF:${REPO_ROOT}/onnxruntime/test/testdata/custom_op_library/custom_op_library.def /IGNORE:4199")
+  # need to ignore the linker warning 4199, due to some global linker flags failing here
+endif()
+set_property(TARGET custom_op_library APPEND_STRING PROPERTY LINK_FLAGS ${ONNXRUNTIME_CUSTOM_OP_LIB_LINK_FLAG})
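
The `if(UNIX)/if(APPLE)/else()` branch above selects linker flags for the new `custom_op_library` test target per platform. As an illustration only (the helper name `custom_op_link_flags` and `sys.platform`-style naming are my own, not part of the build), the same selection logic can be sketched as:

```python
def custom_op_link_flags(platform: str, def_file: str = "custom_op_library.def") -> str:
    """Mirror the CMake branch above: pick link flags per platform.

    `platform` follows sys.platform naming ("darwin", "linux", "win32").
    """
    if platform == "win32":
        # MSVC link.exe: export symbols via a .def file; ignore linker warning 4199
        return f"-DEF:{def_file} /IGNORE:4199"
    if platform == "darwin":
        # Apple ld strips unreferenced code with -dead_strip
        return "-Xlinker -dead_strip"
    # GNU ld: fail on unresolved symbols and drop unused sections
    return "-Xlinker --no-undefined -Xlinker --gc-sections"
```

The `--no-undefined` flag is what makes a missing symbol in the custom op library a link-time error on Linux instead of a load-time failure.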

csharp/src/Microsoft.ML.OnnxRuntime/DisposableNamedOnnxValue.cs

Lines changed: 2 additions & 2 deletions

@@ -205,7 +205,7 @@ private static DisposableNamedOnnxValue DisposableNamedOnnxValueFromNativeTensor
 {
     if (typeof(T) == typeof(string))
     {
-        var nativeTensorWrapper = new NativeOnnxTensorMemory<byte>(nativeOnnxValue, true);
+        var nativeTensorWrapper = new NativeOnnxTensorMemory<string>(nativeOnnxValue);
         var dt = new DenseTensor<string>(nativeTensorWrapper.GetBytesAsStringMemory(), nativeTensorWrapper.Dimensions);
         return new DisposableNamedOnnxValue(name, dt, nativeTensorWrapper);
     }
@@ -225,7 +225,7 @@ private static DisposableNamedOnnxValue DisposableNamedOnnxValueFromNativeMap<K,
     if (typeof(K) == typeof(string))
     {
         var map = new Dictionary<string, V>();
-        var nativeTensorWrapper = new NativeOnnxTensorMemory<byte>(nativeOnnxValueKeys, true);
+        var nativeTensorWrapper = new NativeOnnxTensorMemory<string>(nativeOnnxValueKeys);
         var denseTensorKeys = new DenseTensor<string>(nativeTensorWrapper.GetBytesAsStringMemory(), nativeTensorWrapper.Dimensions);
         for (var i = 0; i < denseTensorKeys.Length; i++)
         {
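
The change above switches string tensors from a raw `byte`-typed wrapper to a `string`-typed one. Under the hood, ONNX Runtime stores a string tensor as one flat UTF-8 blob plus the starting offset of each element. As a rough sketch of that decoding step (the helper name `decode_string_tensor` is mine, and the layout is simplified relative to the native code):

```python
def decode_string_tensor(blob: bytes, offsets: list[int]) -> list[str]:
    """Decode an ORT-style string tensor: a flat UTF-8 blob plus the
    starting byte offset of each element. Each element ends where the
    next one begins; the last element runs to the end of the blob.
    """
    ends = offsets[1:] + [len(blob)]
    return [blob[start:end].decode("utf-8") for start, end in zip(offsets, ends)]
```

For example, the blob `b"catdogbird"` with offsets `[0, 3, 6]` splits into three elements; the variable-length encoding is why string tensors need a typed wrapper rather than a fixed element stride.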

csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.csproj

Lines changed: 2 additions & 1 deletion

@@ -16,7 +16,8 @@
     <TargetArchitecture Condition=" '$(TargetArchitecture)' == '' ">x64</TargetArchitecture>
 
     <!--- packaging properties -->
-    <PackageId>Microsoft.ML.OnnxRuntime</PackageId>
+    <OrtPackageId Condition=" '$(OrtPackageId)' == '' ">Microsoft.ML.OnnxRuntime</OrtPackageId>
+    <PackageId>$(OrtPackageId)</PackageId>
     <Authors>Microsoft</Authors>
     <Description>This package contains ONNX Runtime for .Net platforms</Description>
     <PackageTags>ONNX;ONNX Runtime;Machine Learning</PackageTags>

csharp/src/Microsoft.ML.OnnxRuntime/NamedOnnxValue.cs

Lines changed: 1 addition & 1 deletion

@@ -471,7 +471,7 @@ public static void GetTypeAndWidth(TensorElementType elemType, out Type type, ou
         width = sizeof(sbyte);
         break;
     case TensorElementType.String:
-        type = typeof(byte);
+        type = typeof(string);
         width = sizeof(byte);
         break;
     case TensorElementType.Bool:
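
Note that the fix changes `type` to `typeof(string)` but leaves `width` at `sizeof(byte)`: string elements are variable-length, so there is no fixed stride to report. A hypothetical mirror of that mapping (the names `TYPE_AND_WIDTH` and `get_type_and_width` are illustrative, not ORT's API):

```python
# Illustrative mapping in the spirit of GetTypeAndWidth: fixed-size element
# types report their real byte width; strings now map to str but keep a
# 1-byte placeholder width, since their length comes from offsets instead.
TYPE_AND_WIDTH = {
    "Int8": (int, 1),     # sizeof(sbyte)
    "Float": (float, 4),  # sizeof(float)
    "String": (str, 1),   # variable-length; width is a placeholder
    "Bool": (bool, 1),
}

def get_type_and_width(elem_type: str):
    return TYPE_AND_WIDTH[elem_type]
```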

csharp/src/Microsoft.ML.OnnxRuntime/NativeMethods.cs

Lines changed: 4 additions & 0 deletions

@@ -185,6 +185,7 @@ static NativeMethods()
     OrtSetInterOpNumThreads = (DOrtSetInterOpNumThreads)Marshal.GetDelegateForFunctionPointer(api_.SetInterOpNumThreads, typeof(DOrtSetInterOpNumThreads));
     OrtSetIntraOpNumThreads = (DOrtSetIntraOpNumThreads)Marshal.GetDelegateForFunctionPointer(api_.SetIntraOpNumThreads, typeof(DOrtSetIntraOpNumThreads));
     OrtSetSessionGraphOptimizationLevel = (DOrtSetSessionGraphOptimizationLevel)Marshal.GetDelegateForFunctionPointer(api_.SetSessionGraphOptimizationLevel, typeof(DOrtSetSessionGraphOptimizationLevel));
+    OrtRegisterCustomOpsLibrary = (DOrtRegisterCustomOpsLibrary)Marshal.GetDelegateForFunctionPointer(api_.RegisterCustomOpsLibrary, typeof(DOrtRegisterCustomOpsLibrary));
 
     OrtCreateRunOptions = (DOrtCreateRunOptions)Marshal.GetDelegateForFunctionPointer(api_.CreateRunOptions, typeof(DOrtCreateRunOptions));
     OrtReleaseRunOptions = (DOrtReleaseRunOptions)Marshal.GetDelegateForFunctionPointer(api_.ReleaseRunOptions, typeof(DOrtReleaseRunOptions));
@@ -452,6 +453,9 @@ IntPtr[] outputValues /* An array of output value pointers. Array must be alloca
     public delegate IntPtr /*(OrtStatus*)*/DOrtAddFreeDimensionOverride(IntPtr /*(OrtSessionOptions*) */ options, string /*(const char*)*/ symbolic_dim, int dim_override);
     public static DOrtAddFreeDimensionOverride OrtAddFreeDimensionOverride;
 
+    public delegate IntPtr /*(OrtStatus*)*/DOrtRegisterCustomOpsLibrary(IntPtr /*(OrtSessionOptions*) */ options, string /*(const char*)*/ library_path, out IntPtr /* (void**) */ library_handle);
+    public static DOrtRegisterCustomOpsLibrary OrtRegisterCustomOpsLibrary;
+
 #endregion
 
 #region RunOptions API
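
The pattern in the diff above is: take a raw function pointer out of the native `OrtApi` struct and turn it into a typed, callable delegate with `Marshal.GetDelegateForFunctionPointer`. The same round trip can be sketched with Python's `ctypes` (a standalone analogy with no ONNX Runtime involved; `PROTO`, `_double`, and `delegate` are my own names):

```python
import ctypes

# A callable signature, analogous to declaring a C# delegate type: int f(int)
PROTO = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

def _double(x: int) -> int:
    return x * 2

# Python function -> C function pointer (kept alive so the pointer stays valid)
_keepalive = PROTO(_double)
# Function pointer -> raw address, as it would sit in an API struct field
addr = ctypes.cast(_keepalive, ctypes.c_void_p).value
# Raw address -> typed callable, the GetDelegateForFunctionPointer step
delegate = ctypes.cast(ctypes.c_void_p(addr), PROTO)
```

Calling `delegate(21)` dispatches through the raw pointer back into `_double`, just as the C# wrapper dispatches through `api_.RegisterCustomOpsLibrary` into the native implementation.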
