Commit 735da5f5ad74ad139d3287c897be2057203a6032
by Vitaly Buka
[NFC][sanitizer] Add static to internal functions
Changed files:
 | compiler-rt/lib/sanitizer_common/sanitizer_common_interceptors.inc |
Commit d3a0a65bf01dccadee38d726b6c4d9813c84a048
by pmatos
Reland: "[WebAssembly] Add new pass to lower int/ptr conversions of reftypes"
Add a new pass, LowerRefTypesIntPtrConv, that generates a debugtrap instruction for an inttoptr or ptrtoint of a reference type instead of erroring, since the use of these instructions on non-integral pointers has since been allowed (see ac81cb7e6).
Differential Revision: https://reviews.llvm.org/D107102
Changed files:
 | llvm/lib/Target/WebAssembly/CMakeLists.txt |
 | llvm/lib/Target/WebAssembly/WebAssemblyISelDAGToDAG.cpp |
 | llvm/lib/Target/WebAssembly/WebAssemblyLowerRefTypesIntPtrConv.cpp |
 | llvm/test/CodeGen/WebAssembly/externref-ptrtoint.ll |
 | llvm/lib/Target/WebAssembly/WebAssembly.h |
 | llvm/utils/gn/secondary/llvm/lib/Target/WebAssembly/BUILD.gn |
 | llvm/lib/Target/WebAssembly/WebAssemblyTargetMachine.cpp |
 | llvm/test/CodeGen/WebAssembly/externref-inttoptr.ll |
Commit 150395c2bcee8e9a4c876eada81515fc917ac3b6
by fmayer
[hwasan] report failing thread for invalid free.
Reviewed By: hctim
Differential Revision: https://reviews.llvm.org/D107270
Changed files:
 | compiler-rt/test/hwasan/TestCases/double-free.c |
 | compiler-rt/lib/hwasan/hwasan_report.cpp |
Commit b7fb5b54a93099cf3d7ac64f4a95d9942bc2e6a7
by martin
[LLD] [MinGW] Support both "--opt value" and "--opt=value" for more options
This applies the same fix as D107237 to a few more options, converting all remaining such options to accept both forms for consistency. This fixes building e.g. openldap, which uses --image-base=<value>.
Differential Revision: https://reviews.llvm.org/D107253
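The two accepted spellings can be illustrated with a small hand-rolled parser (a hypothetical sketch, not LLD's actual option handling):

```python
def parse_args(argv, opts_taking_values):
    """Accept both "--opt value" and "--opt=value" spellings, as the
    MinGW driver change does for options like --image-base.
    Illustrative only; LLD's real driver uses generated option tables."""
    result = {}
    i = 0
    while i < len(argv):
        arg = argv[i]
        if "=" in arg:  # the "--opt=value" form
            name, _, value = arg.partition("=")
            if name in opts_taking_values:
                result[name] = value
                i += 1
                continue
        if arg in opts_taking_values:  # the separate "--opt value" form
            result[arg] = argv[i + 1]
            i += 2
            continue
        i += 1  # unrelated argument; skip
    return result
```

Both `parse_args(["--image-base=0x400000"], ...)` and `parse_args(["--image-base", "0x400000"], ...)` then yield the same option value, which is the consistency the commit is after.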
Changed files:
 | lld/test/MinGW/driver.test |
 | lld/MinGW/Options.td |
Commit ce49fd024b43bd76b149f984b8f0d16e92b9bb06
by martin
[clang] [MinGW] Let the last of -mconsole/-mwindows have effect
Don't just check for the existence of one, but check which one was specified last, if any.
This fixes https://llvm.org/PR51296.
Differential Revision: https://reviews.llvm.org/D107261
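The last-one-wins behavior can be sketched as follows (the function name and the console default are illustrative assumptions, not the driver's actual code):

```python
def pick_subsystem(args):
    """Select between -mconsole and -mwindows by scanning all arguments
    and letting the last occurrence win, rather than merely checking
    whether either flag exists. Defaults to console when neither is given."""
    subsystem = "console"
    for arg in args:
        if arg == "-mconsole":
            subsystem = "console"
        elif arg == "-mwindows":
            subsystem = "windows"
    return subsystem
```

With an existence check alone, `-mwindows -mconsole` and `-mconsole -mwindows` would be indistinguishable; the scan above distinguishes them.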
Changed files:
 | clang/lib/Driver/ToolChains/MinGW.cpp |
 | clang/test/Driver/mingw.cpp |
Commit 40202b13b23290a6e20900896838c2dbbfb281bd
by jay.foad
[AMDGPU] Legalize operands of V_ADDC_U32_e32 and friends
These instructions have an implicit use of vcc which counts towards the constant bus limit. Pre-gfx10 this means that the explicit operands cannot be SGPRs. Use the custom inserter hook to call legalizeOperands to enforce that restriction.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51217
Differential Revision: https://reviews.llvm.org/D106868
Changed files:
 | llvm/lib/Target/AMDGPU/SIISelLowering.cpp |
 | llvm/lib/Target/AMDGPU/VOP2Instructions.td |
 | llvm/test/CodeGen/AMDGPU/uaddo.ll |
Commit a02bbeeae7fcaa25c6bdb4c98e2ec8ab5e83cd6d
by cullen.rhodes
[AArch64][AsmParser] NFC: Use helpers in matrix tile list parsing
Changed files:
 | llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp |
Commit 0156f91f3b0af0c2b3c14eecb6192dbb039fc2d2
by david.sherwood
[NFC] Rename enable-strict-reductions to force-ordered-reductions
I'm renaming the flag because a future patch will add a new enableOrderedReductions() TTI interface, so the meaning of this flag will change to forcing the target to enable or disable them. Also, since other places in LoopVectorize.cpp use the word 'Ordered' instead of 'Strict', I changed the flag to match.
Differential Revision: https://reviews.llvm.org/D107264
Changed files:
 | llvm/test/Transforms/LoopVectorize/AArch64/sve-strict-fadd-cost.ll |
 | llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll |
 | llvm/test/Transforms/LoopVectorize/AArch64/strict-fadd.ll |
 | llvm/lib/Transforms/Vectorize/LoopVectorize.cpp |
 | llvm/test/Transforms/LoopVectorize/AArch64/strict-fadd-cost.ll |
 | llvm/test/Transforms/LoopVectorize/AArch64/strict-fadd-vf1.ll |
Commit 831910c5c4941b7c58d4d50d9e20808c8e2c1c0b
by dvyukov
tsan: new MemoryAccess interface
Currently we have a MemoryAccess function that accepts "bool kAccessIsWrite, bool kIsAtomic" and 4 wrappers: MemoryRead/MemoryWrite/MemoryReadAtomic/MemoryWriteAtomic.
Such a scheme with bool flags is not particularly scalable/extensible. Because of that we did not have Read/Write wrappers for UnalignedMemoryAccess, and "true, false" or "false, true" at call sites is not very readable.
Moreover, the new tsan runtime will introduce more flags (e.g. moving "freed" and "vptr access" to memory access flags). We can't have 16 wrappers, and each flag also takes a whole 64-bit register for non-inlined calls.
Introduce an AccessType enum that contains a bit mask of read/write and atomic/non-atomic, and later free/non-free and vptr/non-vptr. Such a scheme is more scalable, more readable, more efficient (it doesn't consume multiple registers for these flags during calls) and covers the unaligned and range variations of the memory access functions as well.
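The bit-mask scheme can be sketched in miniature (flag names and the helper below are illustrative; tsan's actual C++ enum and signatures differ):

```python
from enum import Flag, auto

class AccessType(Flag):
    """Illustrative bit-mask access flags mirroring the scheme the commit
    describes: one flags argument replaces the (is_write, is_atomic) bool
    pair and leaves room for future bits such as vptr without new wrappers."""
    READ = auto()
    WRITE = auto()
    ATOMIC = auto()
    VPTR = auto()

def memory_access(addr, size, typ):
    # Decode the combined flags at one call site instead of threading
    # separate bools through 4 (or 16) specialized wrappers.
    return (f"{'atomic ' if AccessType.ATOMIC in typ else ''}"
            f"{'write' if AccessType.WRITE in typ else 'read'} of {size} bytes")
```

A call like `memory_access(p, 8, AccessType.WRITE | AccessType.ATOMIC)` is self-describing in a way that `MemoryAccess(p, 8, true, true)` is not.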
Also switch from size log to plain size. The new tsan runtime won't have the limitation of supporting only 1/2/4/8 access sizes, so we don't need the logarithms.
Also add an inline thunk that converts the new interface to the old one. For inlined calls it should not add any overhead because all flags/sizes can be computed at compile time.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D107276
Changed files:
 | compiler-rt/lib/tsan/rtl/tsan_external.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_interface_atomic.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_interface_inl.h |
 | compiler-rt/lib/tsan/rtl/tsan_rtl.h |
 | compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_fd.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_rtl.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_interface.cpp |
 | compiler-rt/lib/tsan/go/tsan_go.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_interceptors_posix.cpp |
Commit 18c6ed2f0f293582570ad3f6419e10ff808ba98e
by dvyukov
tsan: add AccessVptr
Add AccessVptr access type. For now it's converted to the same thr->is_vptr_access, but later it will be passed directly to ReportRace and will enable efficient tail calling in MemoryAccess function (currently __tsan_vptr_update/__tsan_vptr_read can't use tail calls in MemoryAccess because of the trailing assignment to thr->is_vptr_access).
Depends on D107276.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D107282
Changed files:
 | compiler-rt/lib/tsan/rtl/tsan_interface_inl.h |
 | compiler-rt/lib/tsan/rtl/tsan_rtl.h |
Commit 69396896fb615067b04a3e0c220f93bc91a10eec
by esme.yi
[llvm-readobj][XCOFF] Fix the error dumping for the first item of StringTable.
Summary: For the string table in XCOFF, the first 4 bytes contain the length of the string table, so we should print string entries starting from the fifth byte. This patch also adds tests for llvm-readobj dumping the string table.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D105522
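A minimal sketch of the layout being fixed, assuming the COFF convention that the leading 4-byte length field counts itself (the function name and big-endian framing are illustrative assumptions):

```python
import struct

def read_string_table(data):
    """Parse an XCOFF-style string table: the first 4 bytes hold the
    total length (assumed here to include the length field itself), and
    NUL-terminated entries follow from the fifth byte onward."""
    (length,) = struct.unpack_from(">I", data, 0)  # big-endian u32 header
    body = data[4:length]                          # skip the 4-byte length
    return [s.decode() for s in body.split(b"\x00") if s]
```

Dumping entries from offset 0 instead of offset 4 would misread the binary length field as the start of the first string, which is the bug the commit addresses.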
Changed files:
 | llvm/test/tools/llvm-readobj/XCOFF/string-table.yaml |
 | llvm/test/tools/yaml2obj/XCOFF/long-symbol-name.yaml |
 | llvm/tools/llvm-readobj/ObjDumper.cpp |
 | llvm/tools/llvm-readobj/ObjDumper.h |
 | llvm/test/tools/yaml2obj/XCOFF/basic-doc64.yaml |
 | llvm/lib/Object/XCOFFObjectFile.cpp |
 | llvm/tools/llvm-readobj/XCOFFDumper.cpp |
Commit d77b476c1953bcb0a608b2d6a4f2dd9fe0b43967
by dvyukov
tsan: avoid extra call indirection in unaligned access functions
Currently the unaligned access functions are defined in tsan_interface.cpp and make a real call to MemoryAccess. This means we pay for a real call and get no read/write constant propagation.
Unaligned memory access can be quite hot for some programs (observed on some compression algorithms with ~90% of unaligned accesses).
Move them to tsan_interface_inl.h to avoid the additional call and enable constant propagation. Also reorder the actual store and memory access handling for __sanitizer_unaligned_store callbacks to enable tail calling in MemoryAccess.
Depends on D107282.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D107283
Changed files:
 | compiler-rt/lib/tsan/rtl/tsan_interface.cpp |
 | compiler-rt/lib/tsan/rtl/tsan_interface_inl.h |
Commit 4f4f2783056fd01182740251b2ce8a77b12684b3
by krasimir
[clang-format] don't break between function and function name in JS
The patch https://reviews.llvm.org/D105964 (https://github.com/llvm/llvm-project/commit/58494c856a15f5b0e886c7baf5d505ac6c05dfe5) updated detection of function declaration names. It had the unfortunate consequence that it started breaking between `function` and the function name in some cases in JavaScript code.
This patch addresses this.
Reviewed By: MyDeveloperDay, owenpan
Differential Revision: https://reviews.llvm.org/D107267
Changed files:
 | clang/lib/Format/ContinuationIndenter.cpp |
 | clang/unittests/Format/FormatTestJS.cpp |
Commit 9b50844fd798b5a81afd4aeb44b053d622747a42
by vlad.vinogradov
[mlir] Fix delayed object interfaces registration
Store both the interfaceID and the objectID as the key for the interface registration callback. Otherwise the implementation allows registering only one external model per object in a single dialect.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D107274
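The effect of the composite key can be sketched with a plain map (a hypothetical illustration, not MLIR's actual registration code):

```python
def make_registry():
    """Key the callback map by (interfaceID, objectID) rather than by
    interfaceID alone, so that several objects in one dialect can each
    register their own external model for the same interface."""
    callbacks = {}

    def register(interface_id, object_id, callback):
        # With interface_id alone as the key, a second object's
        # registration for the same interface would overwrite the first.
        callbacks[(interface_id, object_id)] = callback

    return callbacks, register
```

Two registrations for the same interface but different objects now coexist instead of the later one silently replacing the earlier one.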
Changed files:
 | mlir/lib/IR/Dialect.cpp |
 | mlir/unittests/IR/InterfaceAttachmentTest.cpp |
 | mlir/include/mlir/IR/Dialect.h |
Commit 0d8cd4e2d5d4abb804d40984522e0413c66a3cbd
by Jason Molenda
[AArch64InstPrinter] Change printAddSubImm to comment imm value when shifted
Add a comment when there is a shifted value:
    add x9, x0, #291, lsl #12 ; =1191936
but not when the immediate value is unshifted:
    subs x9, x0, #256 ; =256
where the comment adds nothing for the reader.
Differential Revision: https://reviews.llvm.org/D107196
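The rule can be sketched as follows (a hypothetical helper, not the printer's actual code):

```python
def add_sub_imm_comment(imm, shift):
    """Annotate the effective immediate only when it is shifted: the
    reader can't see 291 << 12 at a glance, but an unshifted #256 needs
    no '=256' restating it."""
    value = imm << shift
    return f"; ={value}" if shift else None
```

For the shifted case above, 291 << 12 gives the 1191936 shown in the comment; for shift 0 no comment is emitted.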
Changed files:
 | llvm/test/CodeGen/AArch64/stack-guard-sysreg.ll |
 | llvm/test/CodeGen/AArch64/srem-seteq.ll |
 | llvm/test/CodeGen/AArch64/vecreduce-bool.ll |
 | llvm/test/CodeGen/AArch64/GlobalISel/freeze.ll |
 | llvm/test/CodeGen/AArch64/extract-bits.ll |
 | llvm/test/CodeGen/AArch64/neg-abs.ll |
 | llvm/test/CodeGen/AArch64/fast-isel-branch-cond-split.ll |
 | llvm/test/CodeGen/AArch64/vec-libcalls.ll |
 | llvm/test/tools/llvm-objdump/ELF/AArch64/disassemble-align.s |
 | llvm/test/CodeGen/AArch64/arm64-rev.ll |
 | llvm/test/CodeGen/AArch64/split-vector-insert.ll |
 | llvm/test/CodeGen/AArch64/sve-lsr-scaled-index-addressing-mode.ll |
 | llvm/test/CodeGen/AArch64/ls64-inline-asm.ll |
 | llvm/test/CodeGen/AArch64/machine-outliner-thunk.ll |
 | llvm/test/CodeGen/AArch64/arm64-bitfield-extract.ll |
 | llvm/test/CodeGen/AArch64/addsub.ll |
 | llvm/test/CodeGen/AArch64/align-down.ll |
 | llvm/test/Transforms/LoopStrengthReduce/AArch64/small-constant.ll |
 | llvm/test/CodeGen/AArch64/combine-comparisons-by-cse.ll |
 | llvm/test/CodeGen/AArch64/statepoint-call-lowering.ll |
 | llvm/test/CodeGen/AArch64/select_const.ll |
 | llvm/test/CodeGen/AArch64/ssub_sat.ll |
 | llvm/test/CodeGen/AArch64/signed-truncation-check.ll |
 | llvm/test/CodeGen/AArch64/inc-of-add.ll |
 | llvm/test/CodeGen/AArch64/sve-extract-vector.ll |
 | llvm/test/CodeGen/AArch64/arm64-popcnt.ll |
 | llvm/test/CodeGen/AArch64/vecreduce-fmin-legalization.ll |
 | llvm/test/CodeGen/AArch64/sub-of-not.ll |
 | llvm/test/CodeGen/AArch64/sve-calling-convention-mixed.ll |
 | llvm/test/Transforms/CanonicalizeFreezeInLoops/aarch64.ll |
 | llvm/test/CodeGen/AArch64/GlobalISel/byval-call.ll |
 | llvm/test/CodeGen/AArch64/srem-lkk.ll |
 | llvm/test/Transforms/LoopStrengthReduce/AArch64/lsr-pre-inc-offset-check.ll |
 | llvm/test/CodeGen/AArch64/lack-of-signed-truncation-check.ll |
 | llvm/test/CodeGen/AArch64/uadd_sat.ll |
 | llvm/test/CodeGen/AArch64/unwind-preserved.ll |
 | llvm/test/CodeGen/AArch64/urem-seteq.ll |
 | llvm/test/CodeGen/AArch64/arm64-vabs.ll |
 | llvm/test/CodeGen/AArch64/check-sign-bit-before-extension.ll |
 | llvm/test/CodeGen/AArch64/use-cr-result-of-dom-icmp-st.ll |
 | llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll |
 | llvm/test/CodeGen/AArch64/vldn_shuffle.ll |
 | llvm/test/CodeGen/AArch64/sink-addsub-of-const.ll |
 | llvm/test/CodeGen/AArch64/aarch64_win64cc_vararg.ll |
 | llvm/test/CodeGen/AArch64/pow.ll |
 | llvm/test/CodeGen/AArch64/sat-add.ll |
 | llvm/test/CodeGen/AArch64/urem-seteq-nonzero.ll |
 | llvm/test/CodeGen/AArch64/uadd_sat_plus.ll |
 | llvm/test/CodeGen/AArch64/usub_sat_vec.ll |
 | llvm/test/CodeGen/AArch64/signbit-shift.ll |
 | llvm/test/CodeGen/AArch64/stack-guard-remat-bitcast.ll |
 | llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll |
 | llvm/test/CodeGen/AArch64/sve-split-extract-elt.ll |
 | llvm/test/CodeGen/AArch64/sub1.ll |
 | llvm/test/CodeGen/AArch64/hoist-and-by-const-from-shl-in-eqcmp-zero.ll |
 | llvm/test/CodeGen/AArch64/arm64-nvcast.ll |
 | llvm/test/CodeGen/AArch64/sve-insert-element.ll |
 | llvm/test/CodeGen/AArch64/vec_umulo.ll |
 | llvm/test/CodeGen/AArch64/branch-relax-bcc.ll |
 | llvm/test/CodeGen/AArch64/cmp-select-sign.ll |
 | llvm/test/CodeGen/AArch64/fptoui-sat-vector.ll |
 | llvm/test/CodeGen/AArch64/shift-mod.ll |
 | llvm/test/CodeGen/AArch64/umulo-128-legalisation-lowering.ll |
 | llvm/test/CodeGen/AArch64/named-vector-shuffle-reverse-neon.ll |
 | llvm/test/CodeGen/AArch64/uadd_sat_vec.ll |
 | llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll |
 | llvm/test/CodeGen/AArch64/wineh-try-catch-nobase.ll |
 | llvm/test/CodeGen/AArch64/arm64-neon-copy.ll |
 | llvm/test/CodeGen/AArch64/fptosi-sat-vector.ll |
 | llvm/test/CodeGen/AArch64/sve-insert-vector.ll |
 | llvm/test/CodeGen/AArch64/vecreduce-fmax-legalization.ll |
 | llvm/test/CodeGen/AArch64/ssub_sat_plus.ll |
 | llvm/test/CodeGen/AArch64/implicit-null-check.ll |
 | llvm/test/CodeGen/AArch64/sdivpow2.ll |
 | llvm/test/CodeGen/AArch64/aarch64-dynamic-stack-layout.ll |
 | llvm/test/CodeGen/AArch64/extract-lowbits.ll |
 | llvm/test/CodeGen/AArch64/uaddo.ll |
 | llvm/test/CodeGen/AArch64/machine-licm-sink-instr.ll |
 | llvm/test/CodeGen/AArch64/urem-seteq-illegal-types.ll |
 | llvm/test/CodeGen/AArch64/srem-vector-lkk.ll |
 | llvm/test/CodeGen/AArch64/arm64-fp128.ll |
 | llvm/test/CodeGen/AArch64/GlobalISel/call-translator-variadic-musttail.ll |
 | llvm/test/CodeGen/AArch64/aarch64-tail-dup-size.ll |
 | llvm/test/CodeGen/AArch64/arm64-ccmp.ll |
 | llvm/test/CodeGen/AArch64/sadd_sat_vec.ll |
 | llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic-128.ll |
 | llvm/test/CodeGen/AArch64/arm64-shrink-wrapping.ll |
 | llvm/test/CodeGen/AArch64/sadd_sat_plus.ll |
 | llvm/test/CodeGen/AArch64/aarch64-matrix-umull-smull.ll |
 | llvm/test/CodeGen/AArch64/insert-subvector-res-legalization.ll |
 | llvm/test/CodeGen/AArch64/hoist-and-by-const-from-lshr-in-eqcmp-zero.ll |
 | llvm/test/CodeGen/AArch64/vec_uaddo.ll |
 | llvm/test/CodeGen/AArch64/funnel-shift.ll |
 | llvm/test/CodeGen/AArch64/sadd_sat.ll |
 | llvm/test/tools/UpdateTestChecks/update_llc_test_checks/Inputs/aarch64_generated_funcs.ll.generated.expected |
 | llvm/test/CodeGen/AArch64/atomicrmw-O0.ll |
 | llvm/test/CodeGen/AArch64/ldst-paired-aliasing.ll |
 | llvm/test/CodeGen/AArch64/ssub_sat_vec.ll |
 | llvm/test/CodeGen/AArch64/atomicrmw-xchg-fp.ll |
 | llvm/test/CodeGen/AArch64/logical_shifted_reg.ll |
 | llvm/test/CodeGen/AArch64/pr48188.ll |
 | llvm/test/CodeGen/AArch64/aarch64-load-ext.ll |
 | llvm/test/CodeGen/AArch64/arm64-memset-inline.ll |
 | llvm/test/CodeGen/AArch64/cgp-usubo.ll |
 | llvm/lib/Target/AArch64/MCTargetDesc/AArch64InstPrinter.cpp |
 | llvm/test/CodeGen/AArch64/sve-ld1r.ll |
 | llvm/test/CodeGen/AArch64/branch-relax-cbz.ll |
 | llvm/test/CodeGen/AArch64/named-vector-shuffles-sve.ll |
 | llvm/test/CodeGen/AArch64/fast-isel-sdiv.ll |
 | llvm/test/CodeGen/AArch64/ragreedy-local-interval-cost.ll |
 | llvm/test/CodeGen/AArch64/arm64-atomic-128.ll |
 | llvm/test/CodeGen/AArch64/arm64-abi-varargs.ll |
 | llvm/test/tools/UpdateTestChecks/update_llc_test_checks/Inputs/aarch64_generated_funcs.ll.nogenerated.expected |
 | llvm/test/CodeGen/AArch64/i128_volatile_load_store.ll |
 | llvm/test/CodeGen/AArch64/addsub-constant-folding.ll |
 | llvm/test/CodeGen/AArch64/sve-split-insert-elt.ll |
Commit f0008a4cf43588ff695c84dbfe3b1ae89640f85c
by frgossen
[MLIR] Add `getI8Type` to `OpBuilder`
Differential Revision: https://reviews.llvm.org/D107332
Changed files:
 | mlir/lib/IR/Builders.cpp |
 | mlir/include/mlir/IR/Builders.h |