1. [clang] Refactor doc comments to Decls attribution (details)
2. [ARM] Add MVE beats vector cost model (details)
Commit f31d8df1c8c69e7a787c1c1c529a524f3001c66a by Jan Korous
[clang] Refactor doc comments to Decls attribution
- Create ASTContext::attachCommentsToJustParsedDecls so we don't have to
load external comments in Sema when trying to attach existing comments
to just parsed Decls.
- Keep comments ordered and cache their decomposed location for faster
SourceLoc-based searching.
- Optimize work with redeclarations.
- Keep one comment per redeclaration chain (represented by canonical
Decl) instead of comment per redeclaration.
- For redeclaration chains with no comment attached, keep just the last
declaration in the chain that had no comment instead of every comment-less
declaration.
Differential Revision:
llvm-svn: 368732
The file was modified clang/lib/Sema/SemaDecl.cpp
The file was modified clang/lib/Serialization/ASTReader.cpp
The file was modified clang/lib/Serialization/ASTWriter.cpp
The file was added clang/test/Index/comment-redeclarations.cpp
The file was modified clang/include/clang/AST/ASTContext.h
The file was modified clang/lib/AST/ASTContext.cpp
The file was modified clang/lib/AST/RawCommentList.cpp
The file was modified clang/include/clang/AST/RawCommentList.h
Commit a655393f17424c92bc81a5084f3c65fcb361040d by
[ARM] Add MVE beats vector cost model
The MVE architecture has the idea of "beats", where a vector instruction
can be executed over several ticks of the architecture. This adds a
similar system into the Arm backend cost model, multiplying the cost of
all vector instructions by a factor.
This factor essentially becomes the expected difference between scalar
code and vector code, on average. MVE vector instructions can also
overlap, so their true cost is often lower. But equally, scalar
instructions can in some situations be dual-issued, or benefit from other
optimisations such as unrolling or the use of DSP instructions. The
default is chosen as 2. This should not prevent vectorisation in most
cases (as the vector instructions will still be doing at least 4 times
the work), but it will help prevent over-vectorising in cases where the
benefits are less likely.
This so far adds the factor to the obvious places in ARMTargetTransformInfo,
and updates a few related costs, like not treating float instructions as
cost 2 just because they are floats.
Differential Revision:
llvm-svn: 368733
The file was modified llvm/test/Analysis/CostModel/ARM/load_store.ll
The file was modified llvm/test/Analysis/CostModel/ARM/select.ll
The file was modified llvm/test/Analysis/CostModel/ARM/arith.ll
The file was modified llvm/lib/Target/ARM/
The file was modified llvm/lib/Target/ARM/ARMTargetTransformInfo.cpp
The file was modified llvm/test/Analysis/CostModel/ARM/divrem.ll
The file was modified llvm/test/Analysis/CostModel/ARM/shuffle.ll
The file was modified llvm/lib/Target/ARM/ARMSubtarget.cpp
The file was modified llvm/test/Analysis/CostModel/ARM/cast.ll
The file was modified llvm/lib/Target/ARM/ARMSubtarget.h
The file was modified llvm/test/Analysis/CostModel/ARM/fparith.ll