Module core::arch::arm

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on ARM only.
Expand description

Platform-specific intrinsics for the arm platform.

See the module documentation for more details.

Modules

dspExperimental

ARM DSP (digital signal processing) intrinsics.

Structs

APSRExperimental

Application Program Status Register

ISHExperimental

Inner Shareable is the required shareability domain, reads and writes are the required access types

ISHSTExperimental

Inner Shareable is the required shareability domain, writes are the required access type

NSHExperimental

Non-shareable is the required shareability domain, reads and writes are the required access types

NSHSTExperimental

Non-shareable is the required shareability domain, writes are the required access type

OSHExperimental

Outer Shareable is the required shareability domain, reads and writes are the required access types

OSHSTExperimental

Outer Shareable is the required shareability domain, writes are the required access type

STExperimental

Full system is the required shareability domain, writes are the required access type

SYExperimental

Full system is the required shareability domain, reads and writes are the required access types

float32x2_tExperimental

ARM-specific 64-bit wide vector of two packed f32.

float32x4_tExperimental

ARM-specific 128-bit wide vector of four packed f32.

int8x4_tExperimental

ARM-specific 32-bit wide vector of four packed i8.

int8x8_tExperimental

ARM-specific 64-bit wide vector of eight packed i8.

int8x8x2_tExperimental

ARM-specific type containing two int8x8_t vectors.

int8x8x3_tExperimental

ARM-specific type containing three int8x8_t vectors.

int8x8x4_tExperimental

ARM-specific type containing four int8x8_t vectors.

int8x16_tExperimental

ARM-specific 128-bit wide vector of sixteen packed i8.

int16x2_tExperimental

ARM-specific 32-bit wide vector of two packed i16.

int16x4_tExperimental

ARM-specific 64-bit wide vector of four packed i16.

int16x8_tExperimental

ARM-specific 128-bit wide vector of eight packed i16.

int32x2_tExperimental

ARM-specific 64-bit wide vector of two packed i32.

int32x4_tExperimental

ARM-specific 128-bit wide vector of four packed i32.

int64x1_tExperimental

ARM-specific 64-bit wide vector of one packed i64.

int64x2_tExperimental

ARM-specific 128-bit wide vector of two packed i64.

poly8x8_tExperimental

ARM-specific 64-bit wide polynomial vector of eight packed p8.

poly8x8x2_tExperimental

ARM-specific type containing two poly8x8_t vectors.

poly8x8x3_tExperimental

ARM-specific type containing three poly8x8_t vectors.

poly8x8x4_tExperimental

ARM-specific type containing four poly8x8_t vectors.

poly8x16_tExperimental

ARM-specific 128-bit wide vector of sixteen packed p8.

poly16x4_tExperimental

ARM-specific 64-bit wide vector of four packed p16.

poly16x8_tExperimental

ARM-specific 128-bit wide vector of eight packed p16.

poly64x1_tExperimental

ARM-specific 64-bit wide vector of one packed p64.

poly64x2_tExperimental

ARM-specific 128-bit wide vector of two packed p64.

uint8x4_tExperimental

ARM-specific 32-bit wide vector of four packed u8.

uint8x8_tExperimental

ARM-specific 64-bit wide vector of eight packed u8.

uint8x8x2_tExperimental

ARM-specific type containing two uint8x8_t vectors.

uint8x8x3_tExperimental

ARM-specific type containing three uint8x8_t vectors.

uint8x8x4_tExperimental

ARM-specific type containing four uint8x8_t vectors.

uint8x16_tExperimental

ARM-specific 128-bit wide vector of sixteen packed u8.

uint16x2_tExperimental

ARM-specific 32-bit wide vector of two packed u16.

uint16x4_tExperimental

ARM-specific 64-bit wide vector of four packed u16.

uint16x8_tExperimental

ARM-specific 128-bit wide vector of eight packed u16.

uint32x2_tExperimental

ARM-specific 64-bit wide vector of two packed u32.

uint32x4_tExperimental

ARM-specific 128-bit wide vector of four packed u32.

uint64x1_tExperimental

ARM-specific 64-bit wide vector of one packed u64.

uint64x2_tExperimental

ARM-specific 128-bit wide vector of two packed u64.

Functions

__breakpointExperimental

Inserts a breakpoint instruction.

__clrexExperimental

Removes the exclusive lock created by LDREX

__crc32bExperimentalcrc and v8

CRC32 single round checksum for bytes (8 bits).

__crc32cbExperimentalcrc and v8

CRC32-C single round checksum for bytes (8 bits).

__crc32chExperimentalcrc and v8

CRC32-C single round checksum for half words (16 bits).

__crc32cwExperimentalcrc and v8

CRC32-C single round checksum for words (32 bits).

__crc32hExperimentalcrc and v8

CRC32 single round checksum for half words (16 bits).

__crc32wExperimentalcrc and v8

CRC32 single round checksum for words (32 bits).
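
The CRC32 intrinsics above fold one byte, halfword, or word into a running checksum: CRC32 uses the polynomial 0x04C11DB7 and CRC32-C uses 0x1EDC6F41, both processed in bit-reflected order. As an illustrative portable sketch (plain Rust, not the intrinsic itself; the function name is hypothetical), a single `__crc32b`-style round can be written as:

```rust
// Bit-reflected CRC-32 update for one byte, sketching the semantics of
// __crc32b(crc, byte). 0xEDB8_8320 is the reflected CRC-32 polynomial;
// the CRC-32C variants (__crc32cb etc.) would use 0x82F6_3B78 instead.
fn crc32_round_byte(mut crc: u32, byte: u8) -> u32 {
    crc ^= byte as u32;
    for _ in 0..8 {
        crc = if crc & 1 != 0 {
            (crc >> 1) ^ 0xEDB8_8320
        } else {
            crc >> 1
        };
    }
    crc
}
```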

__dbgExperimental

Generates a DBG instruction.

__dmbExperimental

Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.

__dsbExperimental

Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.

__isbExperimental

Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.

__ldrexExperimental

Executes an exclusive LDR instruction for a 32-bit value.

__ldrexbExperimental

Executes an exclusive LDR instruction for an 8-bit value.

__ldrexhExperimental

Executes an exclusive LDR instruction for a 16-bit value.

__nopExperimental

Generates an unspecified no-op instruction.

__qaddExperimental

Signed saturating addition

__qadd8Experimental

Saturating four 8-bit integer additions

__qadd16Experimental

Saturating two 16-bit integer additions

__qasxExperimental

Saturating Add and Subtract with Exchange (QASX): returns the 16-bit signed saturated results

__qdblExperimental

Signed saturating doubling; inserts a QADD instruction adding the value to itself

__qsaxExperimental

Saturating Subtract and Add with Exchange (QSAX): returns the 16-bit signed saturated results

__qsubExperimental

Signed saturating subtraction

__qsub8Experimental

Saturating four 8-bit integer subtractions

__qsub16Experimental

Saturating two 16-bit integer subtractions
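
These parallel saturating operations treat one 32-bit register as packed lanes. A portable sketch of what `__qadd16` computes (an illustration using `i16::saturating_add`, assuming the low halfword occupies the low 16 bits; the helper name is hypothetical):

```rust
// Saturating add of two packed 16-bit lanes, sketching __qadd16 semantics.
fn qadd16(a: u32, b: u32) -> u32 {
    // Low lane: bits 0..16 of each operand, added with signed saturation.
    let lo = (a as i16).saturating_add(b as i16);
    // High lane: bits 16..32, likewise.
    let hi = ((a >> 16) as i16).saturating_add((b >> 16) as i16);
    ((hi as u16 as u32) << 16) | (lo as u16 as u32)
}
```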

__rsrExperimental

Reads a 32-bit system register

__rsrpExperimental

Reads a system register containing an address

__sadd8Experimental

Parallel byte-wise signed addition (SADD8), setting the APSR GE flags

__sadd16Experimental

Parallel halfword-wise signed addition (SADD16), setting the APSR GE flags

__sasxExperimental

Signed Add and Subtract with Exchange (SASX)

__selExperimental

Select bytes from each operand according to APSR GE flags

__sevExperimental

Generates a SEV (send a global event) hint instruction.

__sevlExperimental

Generates a SEVL (send a local event) hint instruction.

__shadd8Experimental

Signed halving parallel byte-wise addition.

__shadd16Experimental

Signed halving parallel halfword-wise addition.

__shsub8Experimental

Signed halving parallel byte-wise subtraction.

__shsub16Experimental

Signed halving parallel halfword-wise subtraction.
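
The halving operations compute the sum (or difference) in a wider type and shift right by one, so no overflow is possible. A per-lane sketch of `__shadd8` semantics (illustrative only; the helper name is hypothetical):

```rust
// One 8-bit lane of a signed halving add: (a + b) >> 1 without overflow,
// computed in a 16-bit intermediate as the SHADD8 description implies.
fn shadd8_lane(a: i8, b: i8) -> i8 {
    ((a as i16 + b as i16) >> 1) as i8
}
```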

__smlabbExperimental

Insert a SMLABB instruction

__smlabtExperimental

Insert a SMLABT instruction

__smladExperimental

Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.

__smlatbExperimental

Insert a SMLATB instruction

__smlattExperimental

Insert a SMLATT instruction

__smlawbExperimental

Insert a SMLAWB instruction

__smlawtExperimental

Insert a SMLAWT instruction

__smlsdExperimental

Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection.

__smuadExperimental

Signed Dual Multiply Add.

__smuadxExperimental

Signed Dual Multiply Add Reversed.

__smulbbExperimental

Insert a SMULBB instruction

__smulbtExperimental

Insert a SMULBT instruction

__smultbExperimental

Insert a SMULTB instruction

__smulttExperimental

Insert a SMULTT instruction

__smulwbExperimental

Insert a SMULWB instruction

__smulwtExperimental

Insert a SMULWT instruction

__smusdExperimental

Signed Dual Multiply Subtract.

__smusdxExperimental

Signed Dual Multiply Subtract Reversed.

__ssub8Experimental

Inserts a SSUB8 instruction.

__strexExperimental

Executes an exclusive STR instruction for 32-bit values

__strexbExperimental

Executes an exclusive STR instruction for 8-bit values

__usad8Experimental

Sum of 8-bit absolute differences.

__usada8Experimental

Sum of 8-bit absolute differences and constant.
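
A portable sketch of the sum-of-absolute-differences computation behind `__usad8` (illustrative; the helper takes the four bytes as an array rather than a packed register):

```rust
// Sum of absolute differences of four unsigned bytes, as in USAD8.
fn usad8(a: [u8; 4], b: [u8; 4]) -> u32 {
    a.iter()
        .zip(b.iter())
        .map(|(&x, &y)| (x as i32 - y as i32).unsigned_abs())
        .sum()
}
```

`__usada8` additionally adds a third accumulator operand to this sum.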

__usub8Experimental

Inserts a USUB8 instruction.

__wfeExperimental

Generates a WFE (wait for event) hint instruction, or nothing.

__wfiExperimental

Generates a WFI (wait for interrupt) hint instruction, or nothing.

__wsrExperimental

Writes a 32-bit system register

__wsrpExperimental

Writes a system register containing an address

__yieldExperimental

Generates a YIELD hint instruction.

_clz_u8Experimentalv7

Count Leading Zeros.

_clz_u16Experimentalv7

Count Leading Zeros.

_clz_u32Experimentalv7

Count Leading Zeros.

_rbit_u32Experimentalv7

Reverse the bit order.

_rev_u16Experimental

Reverse the order of the bytes.

_rev_u32Experimental

Reverse the order of the bytes.
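
On the scalar side, the bit-manipulation intrinsics correspond directly to methods on Rust's integer types, which can serve as portable stand-ins (the wrapper names are hypothetical):

```rust
// Portable equivalents of the scalar bit-manipulation intrinsics.
fn clz_u32(x: u32) -> u32 { x.leading_zeros() }  // _clz_u32 (CLZ)
fn rbit_u32(x: u32) -> u32 { x.reverse_bits() }  // _rbit_u32 (RBIT)
fn rev_u32(x: u32) -> u32 { x.swap_bytes() }     // _rev_u32 (REV)
```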

udfExperimental

Generates the trap instruction UDF.

vaba_s8Experimentalneon and v7
vaba_s16Experimentalneon and v7
vaba_s32Experimentalneon and v7
vaba_u8Experimentalneon and v7
vaba_u16Experimentalneon and v7
vaba_u32Experimentalneon and v7
vabal_s8Experimentalneon and v7

Signed Absolute difference and Accumulate Long

vabal_s16Experimentalneon and v7

Signed Absolute difference and Accumulate Long

vabal_s32Experimentalneon and v7

Signed Absolute difference and Accumulate Long

vabal_u8Experimentalneon and v7

Unsigned Absolute difference and Accumulate Long

vabal_u16Experimentalneon and v7

Unsigned Absolute difference and Accumulate Long

vabal_u32Experimentalneon and v7

Unsigned Absolute difference and Accumulate Long

vabaq_s8Experimentalneon and v7
vabaq_s16Experimentalneon and v7
vabaq_s32Experimentalneon and v7
vabaq_u8Experimentalneon and v7
vabaq_u16Experimentalneon and v7
vabaq_u32Experimentalneon and v7
vabd_f32Experimentalneon and v7

Floating-point absolute difference between the arguments

vabd_s8Experimentalneon and v7

Absolute difference between the arguments

vabd_s16Experimentalneon and v7

Absolute difference between the arguments

vabd_s32Experimentalneon and v7

Absolute difference between the arguments

vabd_u8Experimentalneon and v7

Absolute difference between the arguments

vabd_u16Experimentalneon and v7

Absolute difference between the arguments

vabd_u32Experimentalneon and v7

Absolute difference between the arguments

vabdl_s8Experimentalneon and v7

Signed Absolute difference Long

vabdl_s16Experimentalneon and v7

Signed Absolute difference Long

vabdl_s32Experimentalneon and v7

Signed Absolute difference Long

vabdl_u8Experimentalneon and v7

Unsigned Absolute difference Long

vabdl_u16Experimentalneon and v7

Unsigned Absolute difference Long

vabdl_u32Experimentalneon and v7

Unsigned Absolute difference Long

vabdq_f32Experimentalneon and v7

Floating-point absolute difference between the arguments

vabdq_s8Experimentalneon and v7

Absolute difference between the arguments

vabdq_s16Experimentalneon and v7

Absolute difference between the arguments

vabdq_s32Experimentalneon and v7

Absolute difference between the arguments

vabdq_u8Experimentalneon and v7

Absolute difference between the arguments

vabdq_u16Experimentalneon and v7

Absolute difference between the arguments

vabdq_u32Experimentalneon and v7

Absolute difference between the arguments

vabs_f32Experimentalneon and v7

Floating-point absolute value

vabs_s8Experimentalneon and v7

Absolute value (wrapping).

vabs_s16Experimentalneon and v7

Absolute value (wrapping).

vabs_s32Experimentalneon and v7

Absolute value (wrapping).

vabsq_f32Experimentalneon and v7

Floating-point absolute value

vabsq_s8Experimentalneon and v7

Absolute value (wrapping).

vabsq_s16Experimentalneon and v7

Absolute value (wrapping).

vabsq_s32Experimentalneon and v7

Absolute value (wrapping).

vadd_f32Experimentalneon and v7

Vector add.

vadd_s8Experimentalneon and v7

Vector add.

vadd_s16Experimentalneon and v7

Vector add.

vadd_s32Experimentalneon and v7

Vector add.

vadd_u8Experimentalneon and v7

Vector add.

vadd_u16Experimentalneon and v7

Vector add.

vadd_u32Experimentalneon and v7

Vector add.

vaddhn_high_s16Experimentalneon and v7

Add returning High Narrow (high half).

vaddhn_high_s32Experimentalneon and v7

Add returning High Narrow (high half).

vaddhn_high_s64Experimentalneon and v7

Add returning High Narrow (high half).

vaddhn_high_u16Experimentalneon and v7

Add returning High Narrow (high half).

vaddhn_high_u32Experimentalneon and v7

Add returning High Narrow (high half).

vaddhn_high_u64Experimentalneon and v7

Add returning High Narrow (high half).

vaddhn_s16Experimentalneon and v7

Add returning High Narrow.

vaddhn_s32Experimentalneon and v7

Add returning High Narrow.

vaddhn_s64Experimentalneon and v7

Add returning High Narrow.

vaddhn_u16Experimentalneon and v7

Add returning High Narrow.

vaddhn_u32Experimentalneon and v7

Add returning High Narrow.

vaddhn_u64Experimentalneon and v7

Add returning High Narrow.
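
"Add returning High Narrow" adds two wide lanes and keeps only the high half of the wrapping sum, narrowing the element type. A per-lane sketch for the s16 case (illustrative; the helper name is hypothetical):

```rust
// One lane of vaddhn_s16: the high byte of the 16-bit wrapping sum.
fn addhn_lane_s16(a: i16, b: i16) -> i8 {
    (((a as i32 + b as i32) >> 8) & 0xFF) as i8
}
```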

vaddl_high_s8Experimentalneon and v7

Signed Add Long (vector, high half).

vaddl_high_s16Experimentalneon and v7

Signed Add Long (vector, high half).

vaddl_high_s32Experimentalneon and v7

Signed Add Long (vector, high half).

vaddl_high_u8Experimentalneon and v7

Unsigned Add Long (vector, high half).

vaddl_high_u16Experimentalneon and v7

Unsigned Add Long (vector, high half).

vaddl_high_u32Experimentalneon and v7

Unsigned Add Long (vector, high half).

vaddl_s8Experimentalneon and v7

Signed Add Long (vector).

vaddl_s16Experimentalneon and v7

Signed Add Long (vector).

vaddl_s32Experimentalneon and v7

Signed Add Long (vector).

vaddl_u8Experimentalneon and v7

Unsigned Add Long (vector).

vaddl_u16Experimentalneon and v7

Unsigned Add Long (vector).

vaddl_u32Experimentalneon and v7

Unsigned Add Long (vector).
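
"Add Long" widens each lane before adding, so the result vector has double-width elements and the sum cannot overflow. A per-lane sketch for `vaddl_s8` (illustrative; the helper name is hypothetical):

```rust
// One lane of a signed Add Long: i8 operands, i16 result.
fn addl_lane_s8(a: i8, b: i8) -> i16 {
    a as i16 + b as i16
}
```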

vaddq_f32Experimentalneon and v7

Vector add.

vaddq_s8Experimentalneon and v7

Vector add.

vaddq_s16Experimentalneon and v7

Vector add.

vaddq_s32Experimentalneon and v7

Vector add.

vaddq_s64Experimentalneon and v7

Vector add.

vaddq_u8Experimentalneon and v7

Vector add.

vaddq_u16Experimentalneon and v7

Vector add.

vaddq_u32Experimentalneon and v7

Vector add.

vaddq_u64Experimentalneon and v7

Vector add.

vaddw_high_s8Experimentalneon and v7

Signed Add Wide (high half).

vaddw_high_s16Experimentalneon and v7

Signed Add Wide (high half).

vaddw_high_s32Experimentalneon and v7

Signed Add Wide (high half).

vaddw_high_u8Experimentalneon and v7

Unsigned Add Wide (high half).

vaddw_high_u16Experimentalneon and v7

Unsigned Add Wide (high half).

vaddw_high_u32Experimentalneon and v7

Unsigned Add Wide (high half).

vaddw_s8Experimentalneon and v7

Signed Add Wide.

vaddw_s16Experimentalneon and v7

Signed Add Wide.

vaddw_s32Experimentalneon and v7

Signed Add Wide.

vaddw_u8Experimentalneon and v7

Unsigned Add Wide.

vaddw_u16Experimentalneon and v7

Unsigned Add Wide.

vaddw_u32Experimentalneon and v7

Unsigned Add Wide.

vaesdq_u8Experimentalcrypto,v8

AES single round decryption.

vaeseq_u8Experimentalcrypto,v8

AES single round encryption.

vaesimcq_u8Experimentalcrypto,v8

AES inverse mix columns.

vaesmcq_u8Experimentalcrypto,v8

AES mix columns.

vand_s8Experimentalneon and v7

Vector bitwise and

vand_s16Experimentalneon and v7

Vector bitwise and

vand_s32Experimentalneon and v7

Vector bitwise and

vand_s64Experimentalneon and v7

Vector bitwise and

vand_u8Experimentalneon and v7

Vector bitwise and

vand_u16Experimentalneon and v7

Vector bitwise and

vand_u32Experimentalneon and v7

Vector bitwise and

vand_u64Experimentalneon and v7

Vector bitwise and

vandq_s8Experimentalneon and v7

Vector bitwise and

vandq_s16Experimentalneon and v7

Vector bitwise and

vandq_s32Experimentalneon and v7

Vector bitwise and

vandq_s64Experimentalneon and v7

Vector bitwise and

vandq_u8Experimentalneon and v7

Vector bitwise and

vandq_u16Experimentalneon and v7

Vector bitwise and

vandq_u32Experimentalneon and v7

Vector bitwise and

vandq_u64Experimentalneon and v7

Vector bitwise and

vbic_s8Experimentalneon and v7

Vector bitwise bit clear

vbic_s16Experimentalneon and v7

Vector bitwise bit clear

vbic_s32Experimentalneon and v7

Vector bitwise bit clear

vbic_s64Experimentalneon and v7

Vector bitwise bit clear

vbic_u8Experimentalneon and v7

Vector bitwise bit clear

vbic_u16Experimentalneon and v7

Vector bitwise bit clear

vbic_u32Experimentalneon and v7

Vector bitwise bit clear

vbic_u64Experimentalneon and v7

Vector bitwise bit clear

vbicq_s8Experimentalneon and v7

Vector bitwise bit clear

vbicq_s16Experimentalneon and v7

Vector bitwise bit clear

vbicq_s32Experimentalneon and v7

Vector bitwise bit clear

vbicq_s64Experimentalneon and v7

Vector bitwise bit clear

vbicq_u8Experimentalneon and v7

Vector bitwise bit clear

vbicq_u16Experimentalneon and v7

Vector bitwise bit clear

vbicq_u32Experimentalneon and v7

Vector bitwise bit clear

vbicq_u64Experimentalneon and v7

Vector bitwise bit clear
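
"Bit clear" (BIC) keeps the bits of the first operand wherever the second operand has a zero bit, i.e. `a AND NOT b` applied lane-wise. A scalar sketch (illustrative; the helper name is hypothetical):

```rust
// BIC semantics on one 32-bit value: clear in `a` every bit set in `b`.
fn bic(a: u32, b: u32) -> u32 {
    a & !b
}
```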

vbsl_f32Experimentalneon and v7

Bitwise Select.

vbsl_p8Experimentalneon and v7

Bitwise Select.

vbsl_p16Experimentalneon and v7

Bitwise Select.

vbsl_s8Experimentalneon and v7

Bitwise Select. This instruction sets each bit in the destination SIMD&FP register to the corresponding bit from the first source SIMD&FP register when the original destination bit was 1, otherwise from the second source SIMD&FP register.

vbsl_s16Experimentalneon and v7

Bitwise Select.

vbsl_s32Experimentalneon and v7

Bitwise Select.

vbsl_s64Experimentalneon and v7

Bitwise Select.

vbsl_u8Experimentalneon and v7

Bitwise Select.

vbsl_u16Experimentalneon and v7

Bitwise Select.

vbsl_u32Experimentalneon and v7

Bitwise Select.

vbsl_u64Experimentalneon and v7

Bitwise Select.

vbslq_f32Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_p8Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_p16Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_s8Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_s16Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_s32Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_s64Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_u8Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_u16Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_u32Experimentalneon and v7

Bitwise Select. (128-bit)

vbslq_u64Experimentalneon and v7

Bitwise Select. (128-bit)
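
Bitwise Select picks each result bit from the first source where the mask bit is 1 and from the second source where it is 0. A scalar sketch of the selection (illustrative; the helper name is hypothetical):

```rust
// BSL semantics on 32 bits: mask chooses between `a` (mask bit 1)
// and `b` (mask bit 0).
fn bsl(mask: u32, a: u32, b: u32) -> u32 {
    (mask & a) | (!mask & b)
}
```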

vcage_f32Experimentalneon and v7

Floating-point absolute compare greater than or equal

vcageq_f32Experimentalneon and v7

Floating-point absolute compare greater than or equal

vcagt_f32Experimentalneon and v7

Floating-point absolute compare greater than

vcagtq_f32Experimentalneon and v7

Floating-point absolute compare greater than

vcale_f32Experimentalneon and v7

Floating-point absolute compare less than or equal

vcaleq_f32Experimentalneon and v7

Floating-point absolute compare less than or equal

vcalt_f32Experimentalneon and v7

Floating-point absolute compare less than

vcaltq_f32Experimentalneon and v7

Floating-point absolute compare less than

vceq_f32Experimentalneon and v7

Floating-point compare equal

vceq_p8Experimentalneon and v7

Compare bitwise Equal (vector)

vceq_s8Experimentalneon and v7

Compare bitwise Equal (vector)

vceq_s16Experimentalneon and v7

Compare bitwise Equal (vector)

vceq_s32Experimentalneon and v7

Compare bitwise Equal (vector)

vceq_u8Experimentalneon and v7

Compare bitwise Equal (vector)

vceq_u16Experimentalneon and v7

Compare bitwise Equal (vector)

vceq_u32Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_f32Experimentalneon and v7

Floating-point compare equal

vceqq_p8Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_s8Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_s16Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_s32Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_u8Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_u16Experimentalneon and v7

Compare bitwise Equal (vector)

vceqq_u32Experimentalneon and v7

Compare bitwise Equal (vector)
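
NEON comparisons produce a mask vector rather than a boolean: each lane becomes all ones when the comparison holds and all zeros otherwise, ready to feed into a bitwise select. A per-lane sketch for the u8 case (illustrative; the helper name is hypothetical):

```rust
// One lane of a vector compare-equal: all-ones on match, all-zeros otherwise.
fn ceq_lane_u8(a: u8, b: u8) -> u8 {
    if a == b { 0xFF } else { 0x00 }
}
```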

vcge_f32Experimentalneon and v7

Floating-point compare greater than or equal

vcge_s8Experimentalneon and v7

Compare signed greater than or equal

vcge_s16Experimentalneon and v7

Compare signed greater than or equal

vcge_s32Experimentalneon and v7

Compare signed greater than or equal

vcge_u8Experimentalneon and v7

Compare unsigned greater than or equal

vcge_u16Experimentalneon and v7

Compare unsigned greater than or equal

vcge_u32Experimentalneon and v7

Compare unsigned greater than or equal

vcgeq_f32Experimentalneon and v7

Floating-point compare greater than or equal

vcgeq_s8Experimentalneon and v7

Compare signed greater than or equal

vcgeq_s16Experimentalneon and v7

Compare signed greater than or equal

vcgeq_s32Experimentalneon and v7

Compare signed greater than or equal

vcgeq_u8Experimentalneon and v7

Compare unsigned greater than or equal

vcgeq_u16Experimentalneon and v7

Compare unsigned greater than or equal

vcgeq_u32Experimentalneon and v7

Compare unsigned greater than or equal

vcgt_f32Experimentalneon and v7

Floating-point compare greater than

vcgt_s8Experimentalneon and v7

Compare signed greater than

vcgt_s16Experimentalneon and v7

Compare signed greater than

vcgt_s32Experimentalneon and v7

Compare signed greater than

vcgt_u8Experimentalneon and v7

Compare unsigned higher

vcgt_u16Experimentalneon and v7

Compare unsigned higher

vcgt_u32Experimentalneon and v7

Compare unsigned higher

vcgtq_f32Experimentalneon and v7

Floating-point compare greater than

vcgtq_s8Experimentalneon and v7

Compare signed greater than

vcgtq_s16Experimentalneon and v7

Compare signed greater than

vcgtq_s32Experimentalneon and v7

Compare signed greater than

vcgtq_u8Experimentalneon and v7

Compare unsigned higher

vcgtq_u16Experimentalneon and v7

Compare unsigned higher

vcgtq_u32Experimentalneon and v7

Compare unsigned higher

vcle_f32Experimentalneon and v7

Floating-point compare less than or equal

vcle_s8Experimentalneon and v7

Compare signed less than or equal

vcle_s16Experimentalneon and v7

Compare signed less than or equal

vcle_s32Experimentalneon and v7

Compare signed less than or equal

vcle_u8Experimentalneon and v7

Compare unsigned less than or equal

vcle_u16Experimentalneon and v7

Compare unsigned less than or equal

vcle_u32Experimentalneon and v7

Compare unsigned less than or equal

vcleq_f32Experimentalneon and v7

Floating-point compare less than or equal

vcleq_s8Experimentalneon and v7

Compare signed less than or equal

vcleq_s16Experimentalneon and v7

Compare signed less than or equal

vcleq_s32Experimentalneon and v7

Compare signed less than or equal

vcleq_u8Experimentalneon and v7

Compare unsigned less than or equal

vcleq_u16Experimentalneon and v7

Compare unsigned less than or equal

vcleq_u32Experimentalneon and v7

Compare unsigned less than or equal

vcls_s8Experimentalneon and v7

Count leading sign bits

vcls_s16Experimentalneon and v7

Count leading sign bits

vcls_s32Experimentalneon and v7

Count leading sign bits

vclsq_s8Experimentalneon and v7

Count leading sign bits

vclsq_s16Experimentalneon and v7

Count leading sign bits

vclsq_s32Experimentalneon and v7

Count leading sign bits
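
CLS counts how many bits directly below the sign bit match it. XOR-ing the value with its own sign extension turns those copies into leading zeros, giving a compact portable sketch (illustrative; the helper name is hypothetical):

```rust
// One lane of vcls_s32: redundant sign bits below the sign bit.
fn cls_lane_s32(x: i32) -> u32 {
    // x >> 31 is 0 or -1 (arithmetic shift); XOR makes sign copies zero.
    ((x ^ (x >> 31)) as u32).leading_zeros() - 1
}
```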

vclt_f32Experimentalneon and v7

Floating-point compare less than

vclt_s8Experimentalneon and v7

Compare signed less than

vclt_s16Experimentalneon and v7

Compare signed less than

vclt_s32Experimentalneon and v7

Compare signed less than

vclt_u8Experimentalneon and v7

Compare unsigned less than

vclt_u16Experimentalneon and v7

Compare unsigned less than

vclt_u32Experimentalneon and v7

Compare unsigned less than

vcltq_f32Experimentalneon and v7

Floating-point compare less than

vcltq_s8Experimentalneon and v7

Compare signed less than

vcltq_s16Experimentalneon and v7

Compare signed less than

vcltq_s32Experimentalneon and v7

Compare signed less than

vcltq_u8Experimentalneon and v7

Compare unsigned less than

vcltq_u16Experimentalneon and v7

Compare unsigned less than

vcltq_u32Experimentalneon and v7

Compare unsigned less than

vclz_s8Experimentalneon and v7

Count leading zero bits

vclz_s16Experimentalneon and v7

Count leading zero bits

vclz_s32Experimentalneon and v7

Count leading zero bits

vclz_u8Experimentalneon and v7

Count leading zero bits

vclz_u16Experimentalneon and v7

Count leading zero bits

vclz_u32Experimentalneon and v7

Count leading zero bits

vclzq_s8Experimentalneon and v7

Count leading zero bits

vclzq_s16Experimentalneon and v7

Count leading zero bits

vclzq_s32Experimentalneon and v7

Count leading zero bits

vclzq_u8Experimentalneon and v7

Count leading zero bits

vclzq_u16Experimentalneon and v7

Count leading zero bits

vclzq_u32Experimentalneon and v7

Count leading zero bits

vcnt_p8Experimentalneon and v7

Population count per byte.

vcnt_s8Experimentalneon and v7

Population count per byte.

vcnt_u8Experimentalneon and v7

Population count per byte.

vcntq_p8Experimentalneon and v7

Population count per byte.

vcntq_s8Experimentalneon and v7

Population count per byte.

vcntq_u8Experimentalneon and v7

Population count per byte.
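
Per-byte population count maps directly onto `count_ones` on a `u8`, which can stand in portably (the helper name is hypothetical):

```rust
// One lane of VCNT: the number of set bits in a byte.
fn cnt_lane_u8(x: u8) -> u8 {
    x.count_ones() as u8
}
```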

vcreate_f32Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_p8Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_p16Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_p64Experimentalneon,aes and crypto,v8

Creates a vector from a 64-bit bit pattern

vcreate_s8Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_s32Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_s64Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_u8Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_u32Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcreate_u64Experimentalneon and v7

Creates a vector from a 64-bit bit pattern

vcvt_f32_s32Experimentalneon and v7

Fixed-point convert to floating-point

vcvt_f32_u32Experimentalneon and v7

Fixed-point convert to floating-point

vcvt_n_f32_s32Experimentalneon,v7

Fixed-point convert to floating-point

vcvt_n_f32_u32Experimentalneon,v7

Fixed-point convert to floating-point

vcvt_n_s32_f32Experimentalneon,v7

Floating-point convert to fixed-point, rounding toward zero

vcvt_n_u32_f32Experimentalneon,v7

Floating-point convert to fixed-point, rounding toward zero

vcvt_s32_f32Experimentalneon and v7

Floating-point convert to signed fixed-point, rounding toward zero

vcvt_u32_f32Experimentalneon and v7

Floating-point convert to unsigned fixed-point, rounding toward zero

vcvtq_f32_s32Experimentalneon and v7

Fixed-point convert to floating-point

vcvtq_f32_u32Experimentalneon and v7

Fixed-point convert to floating-point

vcvtq_n_f32_s32Experimentalneon,v7

Fixed-point convert to floating-point

vcvtq_n_f32_u32Experimentalneon,v7

Fixed-point convert to floating-point

vcvtq_n_s32_f32Experimentalneon,v7

Floating-point convert to fixed-point, rounding toward zero

vcvtq_n_u32_f32Experimentalneon,v7

Floating-point convert to fixed-point, rounding toward zero

vcvtq_s32_f32Experimentalneon and v7

Floating-point convert to signed fixed-point, rounding toward zero

vcvtq_u32_f32Experimentalneon and v7

Floating-point convert to unsigned fixed-point, rounding toward zero
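
The `_n_` conversion variants interpret the integer as a fixed-point number with N fractional bits, i.e. they scale by 2^N before truncating. A per-lane sketch of `vcvt_n_s32_f32` (illustrative; the helper name is hypothetical, and Rust's `as` cast conveniently also rounds toward zero):

```rust
// One lane of a float-to-fixed conversion with n fractional bits.
fn cvt_n_s32_f32(x: f32, n: u32) -> i32 {
    (x * (1u32 << n) as f32) as i32
}
```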

vdup_lane_f32Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_p8Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_p16Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_s8Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_s16Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_s32Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_s64Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_u8Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_u16Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_u32Experimentalneon and v7

Set all vector lanes to the same value

vdup_lane_u64Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_f32Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_p8Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_p16Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_s8Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_s16Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_s32Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_s64Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_u8Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_u16Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_u32Experimentalneon and v7

Set all vector lanes to the same value

vdup_laneq_u64Experimentalneon and v7

Set all vector lanes to the same value

vdup_n_f32Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_p8Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_p16Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_s8Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_s16Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_s32Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_s64Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_u8Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_u16Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_u32Experimentalneon and v7

Duplicate vector element to vector or scalar

vdup_n_u64Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_lane_f32Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_p8Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_p16Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_s8Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_s16Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_s32Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_s64Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_u8Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_u16Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_u32Experimentalneon and v7

Set all vector lanes to the same value

vdupq_lane_u64Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_f32Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_p8Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_p16Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_s8Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_s16Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_s32Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_s64Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_u8Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_u16Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_u32Experimentalneon and v7

Set all vector lanes to the same value

vdupq_laneq_u64Experimentalneon and v7

Set all vector lanes to the same value

vdupq_n_f32Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_p8Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_p16Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_s8Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_s16Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_s32Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_s64Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_u8Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_u16Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_u32Experimentalneon and v7

Duplicate vector element to vector or scalar

vdupq_n_u64Experimentalneon and v7

Duplicate vector element to vector or scalar
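
The `vdup_n_*` intrinsics broadcast a scalar into every lane, while the `vdupq_lane_*` variants pick one lane of an existing vector and broadcast it. A scalar sketch of both, using plain Rust arrays to stand in for NEON registers (the intrinsics themselves are nightly-only and ARM-only, so this model is illustrative, not the real API):

```rust
// Model of vdupq_n_s16: broadcast one scalar into all 8 lanes of a 128-bit vector.
fn vdupq_n_s16_model(value: i16) -> [i16; 8] {
    [value; 8]
}

// Model of vdupq_lane_s16: broadcast lane LANE of a 64-bit vector into all 8 lanes.
fn vdupq_lane_s16_model<const LANE: usize>(v: [i16; 4]) -> [i16; 8] {
    [v[LANE]; 8]
}

fn main() {
    assert_eq!(vdupq_n_s16_model(7), [7; 8]);
    assert_eq!(vdupq_lane_s16_model::<2>([10, 20, 30, 40]), [30; 8]);
    println!("ok");
}
```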

veor_s8Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_s16Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_s32Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_s64Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_u8Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_u16Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_u32Experimentalneon and v7

Vector bitwise exclusive or (vector)

veor_u64Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_s8Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_s16Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_s32Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_s64Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_u8Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_u16Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_u32Experimentalneon and v7

Vector bitwise exclusive or (vector)

veorq_u64Experimentalneon and v7

Vector bitwise exclusive or (vector)
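
The `veor_*` family applies bitwise XOR lane by lane. A minimal scalar model of `veor_u8` on arrays standing in for 64-bit vectors:

```rust
// Model of veor_u8: lane-wise bitwise XOR of two 64-bit vectors.
fn veor_u8_model(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut out = [0u8; 8];
    for i in 0..8 {
        out[i] = a[i] ^ b[i];
    }
    out
}

fn main() {
    // XOR of a register with itself zeroes it -- a common idiom.
    assert_eq!(veor_u8_model([0xFF; 8], [0xFF; 8]), [0; 8]);
    assert_eq!(veor_u8_model([0b1010; 8], [0b0110; 8]), [0b1100; 8]);
    println!("ok");
}
```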

vext_f32Experimentalneon and v7

Extract vector from pair of vectors

vext_p8Experimentalneon and v7

Extract vector from pair of vectors

vext_p16Experimentalneon and v7

Extract vector from pair of vectors

vext_s8Experimentalneon and v7

Extract vector from pair of vectors

vext_s16Experimentalneon and v7

Extract vector from pair of vectors

vext_s32Experimentalneon and v7

Extract vector from pair of vectors

vext_s64Experimentalneon and v7

Extract vector from pair of vectors

vext_u8Experimentalneon and v7

Extract vector from pair of vectors

vext_u16Experimentalneon and v7

Extract vector from pair of vectors

vext_u32Experimentalneon and v7

Extract vector from pair of vectors

vext_u64Experimentalneon and v7

Extract vector from pair of vectors

vextq_f32Experimentalneon and v7

Extract vector from pair of vectors

vextq_p8Experimentalneon and v7

Extract vector from pair of vectors

vextq_p16Experimentalneon and v7

Extract vector from pair of vectors

vextq_s8Experimentalneon and v7

Extract vector from pair of vectors

vextq_s16Experimentalneon and v7

Extract vector from pair of vectors

vextq_s32Experimentalneon and v7

Extract vector from pair of vectors

vextq_s64Experimentalneon and v7

Extract vector from pair of vectors

vextq_u8Experimentalneon and v7

Extract vector from pair of vectors

vextq_u16Experimentalneon and v7

Extract vector from pair of vectors

vextq_u32Experimentalneon and v7

Extract vector from pair of vectors

vextq_u64Experimentalneon and v7

Extract vector from pair of vectors
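
`vext_*` slides a window across the concatenation of two vectors: the result is the upper lanes of the first operand followed by the lower lanes of the second. A scalar sketch of `vext_u8` with the shift amount as a const generic (assuming the usual ACLE semantics, where lane `i` of the result is lane `i + N` of the pair):

```rust
// Model of vext_u8::<N>: lanes N..8 of `a` followed by lanes 0..N of `b`,
// i.e. an 8-lane window over the 16-lane pair (a, b) starting at lane N.
fn vext_u8_model<const N: usize>(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut out = [0u8; 8];
    for i in 0..8 {
        out[i] = if i + N < 8 { a[i + N] } else { b[i + N - 8] };
    }
    out
}

fn main() {
    let a = [0, 1, 2, 3, 4, 5, 6, 7];
    let b = [8, 9, 10, 11, 12, 13, 14, 15];
    assert_eq!(vext_u8_model::<3>(a, b), [3, 4, 5, 6, 7, 8, 9, 10]);
    println!("ok");
}
```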

vfma_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-add to accumulator (vector)

vfma_n_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-add to accumulator (vector)

vfmaq_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-add to accumulator (vector)

vfmaq_n_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-add to accumulator (vector)

vfms_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-subtract from accumulator (vector)

vfms_n_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-subtract from accumulator (vector)

vfmsq_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-subtract from accumulator (vector)

vfmsq_n_f32Experimentalneon and fp-armv8,v8

Floating-point fused multiply-subtract from accumulator (vector)
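
The fused variants compute the product at full precision and round only once after the add or subtract. Assuming the ACLE convention `vfma_f32(a, b, c) = a + b * c`, one lane can be modeled with Rust's `f32::mul_add`, which gives the same single-rounding guarantee:

```rust
// Model of one lane of vfma_f32 / vfms_f32. f32::mul_add fuses the multiply
// and add into a single rounding step, like the hardware instruction.
fn vfma_lane(a: f32, b: f32, c: f32) -> f32 {
    b.mul_add(c, a) // a + b * c, fused
}

fn vfms_lane(a: f32, b: f32, c: f32) -> f32 {
    (-b).mul_add(c, a) // a - b * c, fused
}

fn main() {
    assert_eq!(vfma_lane(1.0, 2.0, 3.0), 7.0);
    assert_eq!(vfms_lane(1.0, 2.0, 3.0), -5.0);
    println!("ok");
}
```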

vget_high_f32Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_p8Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_p16Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_s8Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_s16Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_s32Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_s64Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_u8Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_u16Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_u32Experimentalneon and v7

Return the high half of the 128-bit vector

vget_high_u64Experimentalneon and v7

Return the high half of the 128-bit vector

vget_lane_f32Experimentalneon and v7

Move vector element to scalar

vget_lane_p8Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_p16Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_p64Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_s8Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_s16Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_s32Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_s64Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_u8Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_u16Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_u32Experimentalneon and v7

Move vector element to general-purpose register

vget_lane_u64Experimentalneon and v7

Move vector element to general-purpose register

vget_low_f32Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_p8Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_p16Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_s8Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_s16Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_s32Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_s64Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_u8Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_u16Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_u32Experimentalneon and v7

Return the low half of the 128-bit vector

vget_low_u64Experimentalneon and v7

Return the low half of the 128-bit vector

vgetq_lane_f32Experimentalneon and v7

Move vector element to scalar

vgetq_lane_p8Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_p16Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_p64Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_s8Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_s16Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_s32Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_s64Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_u8Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_u16Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_u32Experimentalneon and v7

Move vector element to general-purpose register

vgetq_lane_u64Experimentalneon and v7

Move vector element to general-purpose register
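
The `vget` family covers three access patterns: `vget_low`/`vget_high` split a 128-bit vector into its 64-bit halves, and `vget_lane`/`vgetq_lane` extract a single lane. A scalar model over plain arrays:

```rust
use std::convert::TryInto;

// Models of the vget family on a 128-bit vector of sixteen u8 lanes.
fn vget_low_u8_model(v: [u8; 16]) -> [u8; 8] {
    v[..8].try_into().unwrap() // lanes 0..7
}

fn vget_high_u8_model(v: [u8; 16]) -> [u8; 8] {
    v[8..].try_into().unwrap() // lanes 8..15
}

fn vgetq_lane_u8_model<const LANE: usize>(v: [u8; 16]) -> u8 {
    v[LANE] // one lane out to a scalar
}

fn main() {
    let v: [u8; 16] = core::array::from_fn(|i| i as u8);
    assert_eq!(vget_low_u8_model(v), [0, 1, 2, 3, 4, 5, 6, 7]);
    assert_eq!(vget_high_u8_model(v), [8, 9, 10, 11, 12, 13, 14, 15]);
    assert_eq!(vgetq_lane_u8_model::<5>(v), 5);
    println!("ok");
}
```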

vhadd_s8Experimentalneon and v7

Halving add

vhadd_s16Experimentalneon and v7

Halving add

vhadd_s32Experimentalneon and v7

Halving add

vhadd_u8Experimentalneon and v7

Halving add

vhadd_u16Experimentalneon and v7

Halving add

vhadd_u32Experimentalneon and v7

Halving add

vhaddq_s8Experimentalneon and v7

Halving add

vhaddq_s16Experimentalneon and v7

Halving add

vhaddq_s32Experimentalneon and v7

Halving add

vhaddq_u8Experimentalneon and v7

Halving add

vhaddq_u16Experimentalneon and v7

Halving add

vhaddq_u32Experimentalneon and v7

Halving add

vhsub_s8Experimentalneon and v7

Signed halving subtract

vhsub_s16Experimentalneon and v7

Signed halving subtract

vhsub_s32Experimentalneon and v7

Signed halving subtract

vhsub_u8Experimentalneon and v7

Signed halving subtract

vhsub_u16Experimentalneon and v7

Signed halving subtract

vhsub_u32Experimentalneon and v7

Signed halving subtract

vhsubq_s8Experimentalneon and v7

Signed halving subtract

vhsubq_s16Experimentalneon and v7

Signed halving subtract

vhsubq_s32Experimentalneon and v7

Signed halving subtract

vhsubq_u8Experimentalneon and v7

Signed halving subtract

vhsubq_u16Experimentalneon and v7

Signed halving subtract

vhsubq_u32Experimentalneon and v7

Signed halving subtract
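
The halving operations compute `(a + b) >> 1` and `(a - b) >> 1` without losing the intermediate carry, which is what makes them useful for averaging without overflow. A single-lane scalar model that widens before shifting (note these truncate toward negative infinity; the rounding variants are the `vrhadd` family, not listed here):

```rust
// Scalar model of one lane of vhadd_s8 / vhsub_s8: widen to i16 first so the
// intermediate sum or difference cannot overflow, then arithmetic-shift by one.
fn vhadd_s8_lane(a: i8, b: i8) -> i8 {
    ((a as i16 + b as i16) >> 1) as i8
}

fn vhsub_s8_lane(a: i8, b: i8) -> i8 {
    ((a as i16 - b as i16) >> 1) as i8
}

fn main() {
    // 100 + 100 would wrap in i8; the halving add still yields the exact mean.
    assert_eq!(vhadd_s8_lane(100, 100), 100);
    assert_eq!(vhadd_s8_lane(1, 2), 1); // truncates, does not round
    assert_eq!(vhsub_s8_lane(-100, 100), -100);
    println!("ok");
}
```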

vld1_dup_f32Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_p8Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_p16Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_s8Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_s16Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_s32Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_s64Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_u8Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_u16Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_u32Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_dup_u64Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1_f32Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_lane_f32Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_p8Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_p16Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_s8Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_s16Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_s32Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_s64Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_u8Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_u16Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_u32Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_lane_u64Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1_p8Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_p16Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_s8Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_s16Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_s32Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_s64Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_u8Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_u16Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_u32Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1_u64Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_dup_f32Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_p8Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_p16Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_s8Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_s16Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_s32Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_s64Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_u8Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_u16Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_u32Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_dup_u64Experimentalneon and v7

Load one single-element structure and replicate to all lanes (of one register).

vld1q_f32Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_lane_f32Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_p8Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_p16Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_s8Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_s16Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_s32Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_s64Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_u8Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_u16Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_u32Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_lane_u64Experimentalneon and v7

Load one single-element structure to one lane of one register.

vld1q_p8Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_p16Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s8Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s16Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s32Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s64Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u8Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u16Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u32Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u64Experimentalneon and v7

Load multiple single-element structures to one, two, three, or four registers.
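
The three `vld1` load patterns differ in what they do with the loaded element(s): `vld1` fills every lane from contiguous memory, `vld1_dup` loads one element and replicates it, and `vld1_lane` loads into a single lane while preserving the rest. A safe scalar sketch over slices and references (the real intrinsics take raw pointers and are `unsafe`):

```rust
use std::convert::TryInto;

// Model of vld1_s16: contiguous load of four lanes from memory.
fn vld1_s16_model(mem: &[i16]) -> [i16; 4] {
    mem[..4].try_into().unwrap()
}

// Model of vld1_dup_s16: load one element and replicate it to all lanes.
fn vld1_dup_s16_model(mem: &i16) -> [i16; 4] {
    [*mem; 4]
}

// Model of vld1_lane_s16: load into lane LANE, keep the other lanes of `v`.
fn vld1_lane_s16_model<const LANE: usize>(mem: &i16, mut v: [i16; 4]) -> [i16; 4] {
    v[LANE] = *mem;
    v
}

fn main() {
    let data = [10i16, 20, 30, 40, 50];
    assert_eq!(vld1_s16_model(&data), [10, 20, 30, 40]);
    assert_eq!(vld1_dup_s16_model(&data[1]), [20; 4]);
    assert_eq!(vld1_lane_s16_model::<2>(&data[4], [0; 4]), [0, 0, 50, 0]);
    println!("ok");
}
```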

vmax_f32Experimentalneon and v7

Maximum (vector)

vmax_s8Experimentalneon and v7

Maximum (vector)

vmax_s16Experimentalneon and v7

Maximum (vector)

vmax_s32Experimentalneon and v7

Maximum (vector)

vmax_u8Experimentalneon and v7

Maximum (vector)

vmax_u16Experimentalneon and v7

Maximum (vector)

vmax_u32Experimentalneon and v7

Maximum (vector)

vmaxnm_f32Experimentalneon and fp-armv8,v8

Floating-point Maximum Number (vector)

vmaxnmq_f32Experimentalneon and fp-armv8,v8

Floating-point Maximum Number (vector)

vmaxq_f32Experimentalneon and v7

Maximum (vector)

vmaxq_s8Experimentalneon and v7

Maximum (vector)

vmaxq_s16Experimentalneon and v7

Maximum (vector)

vmaxq_s32Experimentalneon and v7

Maximum (vector)

vmaxq_u8Experimentalneon and v7

Maximum (vector)

vmaxq_u16Experimentalneon and v7

Maximum (vector)

vmaxq_u32Experimentalneon and v7

Maximum (vector)

vmin_f32Experimentalneon and v7

Minimum (vector)

vmin_s8Experimentalneon and v7

Minimum (vector)

vmin_s16Experimentalneon and v7

Minimum (vector)

vmin_s32Experimentalneon and v7

Minimum (vector)

vmin_u8Experimentalneon and v7

Minimum (vector)

vmin_u16Experimentalneon and v7

Minimum (vector)

vmin_u32Experimentalneon and v7

Minimum (vector)

vminnm_f32Experimentalneon and fp-armv8,v8

Floating-point Minimum Number (vector)

vminnmq_f32Experimentalneon and fp-armv8,v8

Floating-point Minimum Number (vector)

vminq_f32Experimentalneon and v7

Minimum (vector)

vminq_s8Experimentalneon and v7

Minimum (vector)

vminq_s16Experimentalneon and v7

Minimum (vector)

vminq_s32Experimentalneon and v7

Minimum (vector)

vminq_u8Experimentalneon and v7

Minimum (vector)

vminq_u16Experimentalneon and v7

Minimum (vector)

vminq_u32Experimentalneon and v7

Minimum (vector)
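
The difference between `vmax`/`vmin` and the `vmaxnm`/`vminnm` "Number" variants is NaN handling: the Number variants follow IEEE 754-2008 maxNum/minNum, returning the numeric operand when exactly one input is NaN, while the plain variants propagate NaN. A single-lane sketch, assuming this is the relevant distinction (Rust's `f32::max` already has the maxNum behavior):

```rust
// Model of one lane of vmaxnm_f32: a single NaN operand is ignored.
fn vmaxnm_lane(a: f32, b: f32) -> f32 {
    a.max(b) // f32::max returns the other operand if one is NaN
}

// Model of one lane of vmax_f32: a NaN operand propagates to the result.
fn vmax_lane(a: f32, b: f32) -> f32 {
    if a.is_nan() || b.is_nan() { f32::NAN } else { a.max(b) }
}

fn main() {
    assert_eq!(vmaxnm_lane(f32::NAN, 3.0), 3.0); // NaN ignored
    assert!(vmax_lane(f32::NAN, 3.0).is_nan()); // NaN propagated
    assert_eq!(vmaxnm_lane(1.0, 2.0), 2.0);
    println!("ok");
}
```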

vmla_f32Experimentalneon and v7

Floating-point multiply-add to accumulator

vmla_lane_f32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_lane_s16Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_lane_s32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_lane_u16Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_lane_u32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_laneq_f32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_laneq_s16Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_laneq_s32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_laneq_u16Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_laneq_u32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_n_f32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_n_s16Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_n_s32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_n_u16Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_n_u32Experimentalneon and v7

Vector multiply accumulate with scalar

vmla_s8Experimentalneon and v7

Multiply-add to accumulator

vmla_s16Experimentalneon and v7

Multiply-add to accumulator

vmla_s32Experimentalneon and v7

Multiply-add to accumulator

vmla_u8Experimentalneon and v7

Multiply-add to accumulator

vmla_u16Experimentalneon and v7

Multiply-add to accumulator

vmla_u32Experimentalneon and v7

Multiply-add to accumulator

vmlal_lane_s16Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_lane_s32Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_lane_u16Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_lane_u32Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_laneq_s16Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_laneq_s32Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_laneq_u16Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_laneq_u32Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_n_s16Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_n_s32Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_n_u16Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_n_u32Experimentalneon and v7

Vector widening multiply accumulate with scalar

vmlal_s8Experimentalneon and v7

Signed multiply-add long

vmlal_s16Experimentalneon and v7

Signed multiply-add long

vmlal_s32Experimentalneon and v7

Signed multiply-add long

vmlal_u8Experimentalneon and v7

Unsigned multiply-add long

vmlal_u16Experimentalneon and v7

Unsigned multiply-add long

vmlal_u32Experimentalneon and v7

Unsigned multiply-add long

vmlaq_f32Experimentalneon and v7

Floating-point multiply-add to accumulator

vmlaq_lane_f32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_lane_s16Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_lane_s32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_lane_u16Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_lane_u32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_laneq_f32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_laneq_s16Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_laneq_s32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_laneq_u16Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_laneq_u32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_n_f32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_n_s16Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_n_s32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_n_u16Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_n_u32Experimentalneon and v7

Vector multiply accumulate with scalar

vmlaq_s8Experimentalneon and v7

Multiply-add to accumulator

vmlaq_s16Experimentalneon and v7

Multiply-add to accumulator

vmlaq_s32Experimentalneon and v7

Multiply-add to accumulator

vmlaq_u8Experimentalneon and v7

Multiply-add to accumulator

vmlaq_u16Experimentalneon and v7

Multiply-add to accumulator

vmlaq_u32Experimentalneon and v7

Multiply-add to accumulator
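
`vmla` accumulates a product in the element type (wrapping on overflow, like the hardware), whereas the widening `vmlal` variants compute the product at double width so it cannot overflow the narrow type. A single-lane scalar model of both, assuming the ACLE operand order `vmla(a, b, c) = a + b * c`:

```rust
// Model of one lane of vmla_s8: multiply-add in i8, modular on overflow.
fn vmla_s8_lane(a: i8, b: i8, c: i8) -> i8 {
    a.wrapping_add(b.wrapping_mul(c))
}

// Model of one lane of vmlal_s8: the product is widened to i16 first.
fn vmlal_s8_lane(a: i16, b: i8, c: i8) -> i16 {
    a.wrapping_add((b as i16) * (c as i16))
}

fn main() {
    assert_eq!(vmla_s8_lane(1, 2, 3), 7);
    // 20 * 20 = 400 overflows i8, but the widening variant keeps it exact.
    assert_eq!(vmlal_s8_lane(0, 20, 20), 400);
    println!("ok");
}
```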

vmls_f32Experimentalneon and v7

Floating-point multiply-subtract from accumulator

vmls_lane_f32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_lane_s16Experimentalneon and v7

Vector multiply subtract with scalar

vmls_lane_s32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_lane_u16Experimentalneon and v7

Vector multiply subtract with scalar

vmls_lane_u32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_laneq_f32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_laneq_s16Experimentalneon and v7

Vector multiply subtract with scalar

vmls_laneq_s32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_laneq_u16Experimentalneon and v7

Vector multiply subtract with scalar

vmls_laneq_u32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_n_f32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_n_s16Experimentalneon and v7

Vector multiply subtract with scalar

vmls_n_s32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_n_u16Experimentalneon and v7

Vector multiply subtract with scalar

vmls_n_u32Experimentalneon and v7

Vector multiply subtract with scalar

vmls_s8Experimentalneon and v7

Multiply-subtract from accumulator

vmls_s16Experimentalneon and v7

Multiply-subtract from accumulator

vmls_s32Experimentalneon and v7

Multiply-subtract from accumulator

vmls_u8Experimentalneon and v7

Multiply-subtract from accumulator

vmls_u16Experimentalneon and v7

Multiply-subtract from accumulator

vmls_u32Experimentalneon and v7

Multiply-subtract from accumulator

vmlsl_lane_s16Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_lane_s32Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_lane_u16Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_lane_u32Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_laneq_s16Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_laneq_s32Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_laneq_u16Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_laneq_u32Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_n_s16Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_n_s32Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_n_u16Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_n_u32Experimentalneon and v7

Vector widening multiply subtract with scalar

vmlsl_s8Experimentalneon and v7

Signed multiply-subtract long

vmlsl_s16Experimentalneon and v7

Signed multiply-subtract long

vmlsl_s32Experimentalneon and v7

Signed multiply-subtract long

vmlsl_u8Experimentalneon and v7

Unsigned multiply-subtract long

vmlsl_u16Experimentalneon and v7

Unsigned multiply-subtract long

vmlsl_u32Experimentalneon and v7

Unsigned multiply-subtract long

vmlsq_f32Experimentalneon and v7

Floating-point multiply-subtract from accumulator

vmlsq_lane_f32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_lane_s16Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_lane_s32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_lane_u16Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_lane_u32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_laneq_f32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_laneq_s16Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_laneq_s32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_laneq_u16Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_laneq_u32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_n_f32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_n_s16Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_n_s32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_n_u16Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_n_u32Experimentalneon and v7

Vector multiply subtract with scalar

vmlsq_s8Experimentalneon and v7

Multiply-subtract from accumulator

vmlsq_s16Experimentalneon and v7

Multiply-subtract from accumulator

vmlsq_s32Experimentalneon and v7

Multiply-subtract from accumulator

vmlsq_u8Experimentalneon and v7

Multiply-subtract from accumulator

vmlsq_u16Experimentalneon and v7

Multiply-subtract from accumulator

vmlsq_u32Experimentalneon and v7

Multiply-subtract from accumulator

vmov_n_f32Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_p8Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_p16Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_s8Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_s16Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_s32Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_s64Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_u8Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_u16Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_u32Experimentalneon and v7

Duplicate vector element to vector or scalar

vmov_n_u64Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovl_s8Experimentalneon and v7

Vector long move.

vmovl_s16Experimentalneon and v7

Vector long move.

vmovl_s32Experimentalneon and v7

Vector long move.

vmovl_u8Experimentalneon and v7

Vector long move.

vmovl_u16Experimentalneon and v7

Vector long move.

vmovl_u32Experimentalneon and v7

Vector long move.

vmovn_s16Experimentalneon and v7

Vector narrow integer.

vmovn_s32Experimentalneon and v7

Vector narrow integer.

vmovn_s64Experimentalneon and v7

Vector narrow integer.

vmovn_u16Experimentalneon and v7

Vector narrow integer.

vmovn_u32Experimentalneon and v7

Vector narrow integer.

vmovn_u64Experimentalneon and v7

Vector narrow integer.
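
`vmovl` widens each lane to double width (sign- or zero-extending), and `vmovn` narrows back by truncation, keeping only the low half of each element with no saturation (the saturating narrows are the `vqmovn` family). Single-lane scalar models:

```rust
// Model of one lane of vmovl_s8: sign-extend i8 to i16.
fn vmovl_s8_lane(a: i8) -> i16 {
    a as i16
}

// Model of one lane of vmovn_s16: truncate i16 to i8 -- low 8 bits only.
fn vmovn_s16_lane(a: i16) -> i8 {
    a as i8
}

fn main() {
    assert_eq!(vmovl_s8_lane(-5), -5i16);
    assert_eq!(vmovn_s16_lane(300), 44); // 300 = 0x012C, low byte 0x2C = 44
    println!("ok");
}
```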

vmovq_n_f32Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_p8Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_p16Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_s8Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_s16Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_s32Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_s64Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_u8Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_u16Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_u32Experimentalneon and v7

Duplicate vector element to vector or scalar

vmovq_n_u64Experimentalneon and v7

Duplicate vector element to vector or scalar

vmul_f32Experimentalneon and v7

Multiply

vmul_lane_f32Experimentalneon and v7

Floating-point multiply

vmul_lane_s16Experimentalneon and v7

Multiply

vmul_lane_s32Experimentalneon and v7

Multiply

vmul_lane_u16Experimentalneon and v7

Multiply

vmul_lane_u32Experimentalneon and v7

Multiply

vmul_laneq_f32Experimentalneon and v7

Floating-point multiply

vmul_laneq_s16Experimentalneon and v7

Multiply

vmul_laneq_s32Experimentalneon and v7

Multiply

vmul_laneq_u16Experimentalneon and v7

Multiply

vmul_laneq_u32Experimentalneon and v7

Multiply

vmul_n_f32Experimentalneon and v7

Vector multiply by scalar

vmul_n_s16Experimentalneon and v7

Vector multiply by scalar

vmul_n_s32Experimentalneon and v7

Vector multiply by scalar

vmul_n_u16Experimentalneon and v7

Vector multiply by scalar

vmul_n_u32Experimentalneon and v7

Vector multiply by scalar

vmul_p8Experimentalneon and v7

Polynomial multiply

vmul_s8Experimentalneon and v7

Multiply

vmul_s16Experimentalneon and v7

Multiply

vmul_s32Experimentalneon and v7

Multiply

vmul_u8Experimentalneon and v7

Multiply

vmul_u16Experimentalneon and v7

Multiply

vmul_u32Experimentalneon and v7

Multiply

vmull_lane_s16Experimentalneon and v7

Vector long multiply by scalar

vmull_lane_s32Experimentalneon and v7

Vector long multiply by scalar

vmull_lane_u16Experimentalneon and v7

Vector long multiply by scalar

vmull_lane_u32Experimentalneon and v7

Vector long multiply by scalar

vmull_laneq_s16Experimentalneon and v7

Vector long multiply by scalar

vmull_laneq_s32Experimentalneon and v7

Vector long multiply by scalar

vmull_laneq_u16Experimentalneon and v7

Vector long multiply by scalar

vmull_laneq_u32Experimentalneon and v7

Vector long multiply by scalar

vmull_p8Experimentalneon and v7

Polynomial multiply long

vmull_s8Experimentalneon and v7

Signed multiply long

vmull_s16Experimentalneon and v7

Signed multiply long

vmull_s32Experimentalneon and v7

Signed multiply long

vmull_u8Experimentalneon and v7

Unsigned multiply long

vmull_u16Experimentalneon and v7

Unsigned multiply long

vmull_u32Experimentalneon and v7

Unsigned multiply long

vmullh_n_s16Experimentalneon and v7

Vector long multiply with scalar

vmullh_n_u16Experimentalneon and v7

Vector long multiply with scalar

vmulls_n_s32Experimentalneon and v7

Vector long multiply with scalar

vmulls_n_u32Experimentalneon and v7

Vector long multiply with scalar
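The `vmull_*` "long multiply" intrinsics widen each lane before multiplying, so the product cannot overflow. A portable sketch of the `vmull_s16` semantics (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vmull_s16: multiply two i16x4 vectors lane by lane,
// widening each lane to i32 first so the product never overflows.
fn mull_s16(a: [i16; 4], b: [i16; 4]) -> [i32; 4] {
    let mut out = [0i32; 4];
    for i in 0..4 {
        out[i] = (a[i] as i32) * (b[i] as i32);
    }
    out
}

fn main() {
    // 32767 * 32767 does not fit in i16, but the widened i32 result does.
    assert_eq!(
        mull_s16([32767, -2, 3, 0], [32767, 5, -4, 9]),
        [1_073_676_289, -10, -12, 0]
    );
}
```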

vmulq_f32Experimentalneon and v7

Multiply

vmulq_lane_f32Experimentalneon and v7

Floating-point multiply

vmulq_lane_s16Experimentalneon and v7

Multiply

vmulq_lane_s32Experimentalneon and v7

Multiply

vmulq_lane_u16Experimentalneon and v7

Multiply

vmulq_lane_u32Experimentalneon and v7

Multiply

vmulq_laneq_f32Experimentalneon and v7

Floating-point multiply

vmulq_laneq_s16Experimentalneon and v7

Multiply

vmulq_laneq_s32Experimentalneon and v7

Multiply

vmulq_laneq_u16Experimentalneon and v7

Multiply

vmulq_laneq_u32Experimentalneon and v7

Multiply

vmulq_n_f32Experimentalneon and v7

Vector multiply by scalar

vmulq_n_s16Experimentalneon and v7

Vector multiply by scalar

vmulq_n_s32Experimentalneon and v7

Vector multiply by scalar

vmulq_n_u16Experimentalneon and v7

Vector multiply by scalar

vmulq_n_u32Experimentalneon and v7

Vector multiply by scalar

vmulq_p8Experimentalneon and v7

Polynomial multiply

vmulq_s8Experimentalneon and v7

Multiply

vmulq_s16Experimentalneon and v7

Multiply

vmulq_s32Experimentalneon and v7

Multiply

vmulq_u8Experimentalneon and v7

Multiply

vmulq_u16Experimentalneon and v7

Multiply

vmulq_u32Experimentalneon and v7

Multiply

vmvn_p8Experimentalneon and v7

Vector bitwise not.

vmvn_s8Experimentalneon and v7

Vector bitwise not.

vmvn_s16Experimentalneon and v7

Vector bitwise not.

vmvn_s32Experimentalneon and v7

Vector bitwise not.

vmvn_u8Experimentalneon and v7

Vector bitwise not.

vmvn_u16Experimentalneon and v7

Vector bitwise not.

vmvn_u32Experimentalneon and v7

Vector bitwise not.

vmvnq_p8Experimentalneon and v7

Vector bitwise not.

vmvnq_s8Experimentalneon and v7

Vector bitwise not.

vmvnq_s16Experimentalneon and v7

Vector bitwise not.

vmvnq_s32Experimentalneon and v7

Vector bitwise not.

vmvnq_u8Experimentalneon and v7

Vector bitwise not.

vmvnq_u16Experimentalneon and v7

Vector bitwise not.

vmvnq_u32Experimentalneon and v7

Vector bitwise not.
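`vmvn_*`/`vmvnq_*` complement every bit of every lane. A portable sketch of the `vmvn_u8` semantics (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vmvn_u8: lane-wise bitwise NOT.
fn mvn_u8(a: [u8; 8]) -> [u8; 8] {
    a.map(|x| !x) // complement each lane
}

fn main() {
    assert_eq!(
        mvn_u8([0x00, 0xFF, 0x0F, 0xF0, 1, 2, 3, 4]),
        [0xFF, 0x00, 0xF0, 0x0F, 254, 253, 252, 251]
    );
}
```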

vneg_f32Experimentalneon and v7

Negate

vneg_s8Experimentalneon and v7

Negate

vneg_s16Experimentalneon and v7

Negate

vneg_s32Experimentalneon and v7

Negate

vnegq_f32Experimentalneon and v7

Negate

vnegq_s8Experimentalneon and v7

Negate

vnegq_s16Experimentalneon and v7

Negate

vnegq_s32Experimentalneon and v7

Negate

vorn_s8Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_s16Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_s32Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_s64Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_u8Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_u16Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_u32Experimentalneon and v7

Vector bitwise inclusive OR NOT

vorn_u64Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_s8Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_s16Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_s32Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_s64Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_u8Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_u16Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_u32Experimentalneon and v7

Vector bitwise inclusive OR NOT

vornq_u64Experimentalneon and v7

Vector bitwise inclusive OR NOT
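"OR NOT" (`vorn_*`) computes `a | !b` per lane: the second operand is complemented before the inclusive OR. A portable sketch (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vorn_u8: out[i] = a[i] | !b[i].
fn orn_u8(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut out = [0u8; 8];
    for i in 0..8 {
        out[i] = a[i] | !b[i]; // b is complemented, then ORed in
    }
    out
}

fn main() {
    // With b all-ones, !b is zero, so the result is just a.
    assert_eq!(orn_u8([0x0F; 8], [0xFF; 8]), [0x0F; 8]);
    // With b all-zeros, !b is all-ones, so every lane saturates to 0xFF.
    assert_eq!(orn_u8([0x0F; 8], [0x00; 8]), [0xFF; 8]);
}
```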

vorr_s8Experimentalneon and v7

Vector bitwise inclusive OR

vorr_s16Experimentalneon and v7

Vector bitwise inclusive OR

vorr_s32Experimentalneon and v7

Vector bitwise inclusive OR

vorr_s64Experimentalneon and v7

Vector bitwise inclusive OR

vorr_u8Experimentalneon and v7

Vector bitwise inclusive OR

vorr_u16Experimentalneon and v7

Vector bitwise inclusive OR

vorr_u32Experimentalneon and v7

Vector bitwise inclusive OR

vorr_u64Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_s8Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_s16Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_s32Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_s64Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_u8Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_u16Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_u32Experimentalneon and v7

Vector bitwise inclusive OR

vorrq_u64Experimentalneon and v7

Vector bitwise inclusive OR

vpadal_s8Experimentalneon and v7

Signed Add and Accumulate Long Pairwise.

vpadal_s16Experimentalneon and v7

Signed Add and Accumulate Long Pairwise.

vpadal_s32Experimentalneon and v7

Signed Add and Accumulate Long Pairwise.

vpadal_u8Experimentalneon and v7

Unsigned Add and Accumulate Long Pairwise.

vpadal_u16Experimentalneon and v7

Unsigned Add and Accumulate Long Pairwise.

vpadal_u32Experimentalneon and v7

Unsigned Add and Accumulate Long Pairwise.

vpadalq_s8Experimentalneon and v7

Signed Add and Accumulate Long Pairwise.

vpadalq_s16Experimentalneon and v7

Signed Add and Accumulate Long Pairwise.

vpadalq_s32Experimentalneon and v7

Signed Add and Accumulate Long Pairwise.

vpadalq_u8Experimentalneon and v7

Unsigned Add and Accumulate Long Pairwise.

vpadalq_u16Experimentalneon and v7

Unsigned Add and Accumulate Long Pairwise.

vpadalq_u32Experimentalneon and v7

Unsigned Add and Accumulate Long Pairwise.
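"Add and Accumulate Long Pairwise" (`vpadal_*`) sums each adjacent pair of narrow lanes into a widened lane, then adds that to the accumulator vector. A portable sketch of `vpadal_s8` (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vpadal_s8: widen and sum each adjacent pair of i8
// lanes, then accumulate into the corresponding i16 lane of `acc`.
fn padal_s8(acc: [i16; 4], a: [i8; 8]) -> [i16; 4] {
    let mut out = acc;
    for i in 0..4 {
        out[i] += a[2 * i] as i16 + a[2 * i + 1] as i16;
    }
    out
}

fn main() {
    assert_eq!(
        padal_s8([100, -1, 0, 5], [127, 127, -3, 3, 10, 20, 1, 1]),
        [354, -1, 30, 7]
    );
}
```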

vpadd_s8Experimentalneon and v7

Add pairwise.

vpadd_s16Experimentalneon and v7

Add pairwise.

vpadd_s32Experimentalneon and v7

Add pairwise.

vpadd_u8Experimentalneon and v7

Add pairwise.

vpadd_u16Experimentalneon and v7

Add pairwise.

vpadd_u32Experimentalneon and v7

Add pairwise.

vpaddl_s8Experimentalneon and v7

Signed Add Long Pairwise.

vpaddl_s16Experimentalneon and v7

Signed Add Long Pairwise.

vpaddl_s32Experimentalneon and v7

Signed Add Long Pairwise.

vpaddl_u8Experimentalneon and v7

Unsigned Add Long Pairwise.

vpaddl_u16Experimentalneon and v7

Unsigned Add Long Pairwise.

vpaddl_u32Experimentalneon and v7

Unsigned Add Long Pairwise.

vpaddlq_s8Experimentalneon and v7

Signed Add Long Pairwise.

vpaddlq_s16Experimentalneon and v7

Signed Add Long Pairwise.

vpaddlq_s32Experimentalneon and v7

Signed Add Long Pairwise.

vpaddlq_u8Experimentalneon and v7

Unsigned Add Long Pairwise.

vpaddlq_u16Experimentalneon and v7

Unsigned Add Long Pairwise.

vpaddlq_u32Experimentalneon and v7

Unsigned Add Long Pairwise.

vpmax_f32Experimentalneon and v7

Folding maximum of adjacent pairs

vpmax_s8Experimentalneon and v7

Folding maximum of adjacent pairs

vpmax_s16Experimentalneon and v7

Folding maximum of adjacent pairs

vpmax_s32Experimentalneon and v7

Folding maximum of adjacent pairs

vpmax_u8Experimentalneon and v7

Folding maximum of adjacent pairs

vpmax_u16Experimentalneon and v7

Folding maximum of adjacent pairs

vpmax_u32Experimentalneon and v7

Folding maximum of adjacent pairs

vpmin_f32Experimentalneon and v7

Folding minimum of adjacent pairs

vpmin_s8Experimentalneon and v7

Folding minimum of adjacent pairs

vpmin_s16Experimentalneon and v7

Folding minimum of adjacent pairs

vpmin_s32Experimentalneon and v7

Folding minimum of adjacent pairs

vpmin_u8Experimentalneon and v7

Folding minimum of adjacent pairs

vpmin_u16Experimentalneon and v7

Folding minimum of adjacent pairs

vpmin_u32Experimentalneon and v7

Folding minimum of adjacent pairs
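A "folding" maximum/minimum reduces adjacent pairs: the low half of the result holds the pairwise maxima of the first operand, the high half those of the second. A portable sketch of `vpmax_s8` under that pairing convention (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vpmax_s8: fold adjacent pairs with `max`.
fn pmax_s8(a: [i8; 8], b: [i8; 8]) -> [i8; 8] {
    let mut out = [0i8; 8];
    for i in 0..4 {
        out[i] = a[2 * i].max(a[2 * i + 1]); // pairs from a fill the low half
        out[i + 4] = b[2 * i].max(b[2 * i + 1]); // pairs from b fill the high half
    }
    out
}

fn main() {
    assert_eq!(
        pmax_s8([1, 2, 3, -4, 5, 6, 7, 8], [-1, -2, 0, 9, 4, 4, 2, 1]),
        [2, 3, 6, 8, -1, 9, 4, 2]
    );
}
```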

vqabs_s8Experimentalneon and v7

Signed saturating absolute value

vqabs_s16Experimentalneon and v7

Signed saturating absolute value

vqabs_s32Experimentalneon and v7

Signed saturating absolute value

vqabsq_s8Experimentalneon and v7

Signed saturating absolute value

vqabsq_s16Experimentalneon and v7

Signed saturating absolute value

vqabsq_s32Experimentalneon and v7

Signed saturating absolute value
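The saturating absolute value differs from plain `abs` in exactly one case: `i8::MIN` has no positive counterpart, so it saturates to `i8::MAX` instead of wrapping. A portable sketch of `vqabs_s8` (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vqabs_s8: lane-wise absolute value that saturates
// i8::MIN (-128) to i8::MAX (127) instead of overflowing.
fn qabs_s8(a: [i8; 8]) -> [i8; 8] {
    a.map(|x| if x == i8::MIN { i8::MAX } else { x.abs() })
}

fn main() {
    assert_eq!(
        qabs_s8([-128, -1, 127, 0, -5, 5, -128, 100]),
        [127, 1, 127, 0, 5, 5, 127, 100]
    );
}
```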

vqadd_s8Experimentalneon and v7

Saturating add

vqadd_s16Experimentalneon and v7

Saturating add

vqadd_s32Experimentalneon and v7

Saturating add

vqadd_s64Experimentalneon and v7

Saturating add

vqadd_u8Experimentalneon and v7

Saturating add

vqadd_u16Experimentalneon and v7

Saturating add

vqadd_u32Experimentalneon and v7

Saturating add

vqadd_u64Experimentalneon and v7

Saturating add

vqaddq_s8Experimentalneon and v7

Saturating add

vqaddq_s16Experimentalneon and v7

Saturating add

vqaddq_s32Experimentalneon and v7

Saturating add

vqaddq_s64Experimentalneon and v7

Saturating add

vqaddq_u8Experimentalneon and v7

Saturating add

vqaddq_u16Experimentalneon and v7

Saturating add

vqaddq_u32Experimentalneon and v7

Saturating add

vqaddq_u64Experimentalneon and v7

Saturating add
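Saturating add (`vqadd_*`) clamps to the lane type's range instead of wrapping on overflow. A portable sketch of `vqadd_s8` using Rust's built-in saturating arithmetic (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vqadd_s8: lane-wise add that clamps at i8::MIN/i8::MAX.
fn qadd_s8(a: [i8; 8], b: [i8; 8]) -> [i8; 8] {
    let mut out = [0i8; 8];
    for i in 0..8 {
        out[i] = a[i].saturating_add(b[i]);
    }
    out
}

fn main() {
    // 120 + 100 saturates to 127; -120 + -100 saturates to -128.
    assert_eq!(
        qadd_s8([120, -120, 1, 0, 0, 0, 0, 0], [100, -100, 2, 0, 0, 0, 0, 0]),
        [127, -128, 3, 0, 0, 0, 0, 0]
    );
}
```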

vqdmlal_lane_s16Experimentalneon and v7

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_lane_s32Experimentalneon and v7

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_n_s16Experimentalneon and v7

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_n_s32Experimentalneon and v7

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_s16Experimentalneon and v7

Signed saturating doubling multiply-add long

vqdmlal_s32Experimentalneon and v7

Signed saturating doubling multiply-add long

vqdmlsl_lane_s16Experimentalneon and v7

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_lane_s32Experimentalneon and v7

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_n_s16Experimentalneon and v7

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_n_s32Experimentalneon and v7

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_s16Experimentalneon and v7

Signed saturating doubling multiply-subtract long

vqdmlsl_s32Experimentalneon and v7

Signed saturating doubling multiply-subtract long

vqdmulh_n_s16Experimentalneon and v7

Vector saturating doubling multiply high with scalar

vqdmulh_n_s32Experimentalneon and v7

Vector saturating doubling multiply high with scalar

vqdmulh_s16Experimentalneon and v7

Signed saturating doubling multiply returning high half

vqdmulh_s32Experimentalneon and v7

Signed saturating doubling multiply returning high half

vqdmulhq_nq_s16Experimentalneon and v7

Vector saturating doubling multiply high with scalar

vqdmulhq_nq_s32Experimentalneon and v7

Vector saturating doubling multiply high with scalar

vqdmulhq_s16Experimentalneon and v7

Signed saturating doubling multiply returning high half

vqdmulhq_s32Experimentalneon and v7

Signed saturating doubling multiply returning high half

vqdmull_lane_s16Experimentalneon and v7

Vector saturating doubling long multiply by scalar

vqdmull_lane_s32Experimentalneon and v7

Vector saturating doubling long multiply by scalar

vqdmull_n_s16Experimentalneon and v7

Vector saturating doubling long multiply with scalar

vqdmull_n_s32Experimentalneon and v7

Vector saturating doubling long multiply with scalar

vqdmull_s16Experimentalneon and v7

Signed saturating doubling multiply long

vqdmull_s32Experimentalneon and v7

Signed saturating doubling multiply long
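"Saturating doubling multiply long" (`vqdmull_*`) widens, multiplies, and doubles each lane; the only input that can overflow the widened result is `MIN * MIN`, which saturates. A portable sketch of `vqdmull_s16` (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vqdmull_s16: out[i] = saturate_i32(2 * a[i] * b[i]),
// computed in i64 so only the final clamp can saturate.
fn qdmull_s16(a: [i16; 4], b: [i16; 4]) -> [i32; 4] {
    let mut out = [0i32; 4];
    for i in 0..4 {
        out[i] = (2i64 * a[i] as i64 * b[i] as i64)
            .clamp(i32::MIN as i64, i32::MAX as i64) as i32;
    }
    out
}

fn main() {
    // 2 * (-32768) * (-32768) = 2^31 overflows i32 and saturates to i32::MAX.
    assert_eq!(
        qdmull_s16([-32768, 3, 0, 100], [-32768, 4, 7, -100]),
        [i32::MAX, 24, 0, -20000]
    );
}
```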

vqmovn_s16Experimentalneon and v7

Signed saturating extract narrow

vqmovn_s32Experimentalneon and v7

Signed saturating extract narrow

vqmovn_s64Experimentalneon and v7

Signed saturating extract narrow

vqmovn_u16Experimentalneon and v7

Unsigned saturating extract narrow

vqmovn_u32Experimentalneon and v7

Unsigned saturating extract narrow

vqmovn_u64Experimentalneon and v7

Unsigned saturating extract narrow

vqmovun_s16Experimentalneon and v7

Signed saturating extract unsigned narrow

vqmovun_s32Experimentalneon and v7

Signed saturating extract unsigned narrow

vqmovun_s64Experimentalneon and v7

Signed saturating extract unsigned narrow
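"Saturating extract narrow" (`vqmovn_*`) halves the lane width, clamping values that do not fit the narrower type (the `vqmovun_*` variants clamp a signed source into the unsigned range instead). A portable sketch of `vqmovn_s16` (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vqmovn_s16: narrow each i16 lane to i8, clamping
// out-of-range values to i8::MIN/i8::MAX instead of truncating.
fn qmovn_s16(a: [i16; 8]) -> [i8; 8] {
    a.map(|x| x.clamp(i8::MIN as i16, i8::MAX as i16) as i8)
}

fn main() {
    assert_eq!(
        qmovn_s16([300, -300, 5, -128, 127, 128, -129, 0]),
        [127, -128, 5, -128, 127, 127, -128, 0]
    );
}
```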

vqneg_s8Experimentalneon and v7

Signed saturating negate

vqneg_s16Experimentalneon and v7

Signed saturating negate

vqneg_s32Experimentalneon and v7

Signed saturating negate

vqnegq_s8Experimentalneon and v7

Signed saturating negate

vqnegq_s16Experimentalneon and v7

Signed saturating negate

vqnegq_s32Experimentalneon and v7

Signed saturating negate

vqrdmlah_lane_s16Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_lane_s32Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_laneq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_laneq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_s16Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_s32Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_lane_s16Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_lane_s32Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_laneq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_laneq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlsh_lane_s16Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_lane_s32Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_laneq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_laneq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_s16Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_s32Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_lane_s16Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_lane_s32Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_laneq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_laneq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply subtract returning high half

vqrdmulh_lane_s16Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_lane_s32Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_laneq_s16Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_laneq_s32Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_n_s16Experimentalneon and v7

Vector saturating rounding doubling multiply high with scalar

vqrdmulh_n_s32Experimentalneon and v7

Vector saturating rounding doubling multiply high with scalar

vqrdmulh_s16Experimentalneon and v7

Signed saturating rounding doubling multiply returning high half

vqrdmulh_s32Experimentalneon and v7

Signed saturating rounding doubling multiply returning high half

vqrdmulhq_lane_s16Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_lane_s32Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_laneq_s16Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_laneq_s32Experimentalneon and v7

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_n_s16Experimentalneon and v7

Vector saturating rounding doubling multiply high with scalar

vqrdmulhq_n_s32Experimentalneon and v7

Vector saturating rounding doubling multiply high with scalar

vqrdmulhq_s16Experimentalneon and v7

Signed saturating rounding doubling multiply returning high half

vqrdmulhq_s32Experimentalneon and v7

Signed saturating rounding doubling multiply returning high half
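The rounding variant (`vqrdmulh_*`) adds a rounding constant before taking the high half, which makes it the standard Q15/Q31 fixed-point multiply. A portable sketch of `vqrdmulh_s16` under the usual `(2*a*b + 2^15) >> 16` formulation (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vqrdmulh_s16: doubling multiply, round, take the high
// half, and saturate. Computed in i64 so the MIN*MIN case cannot overflow.
fn qrdmulh_s16(a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    let mut out = [0i16; 4];
    for i in 0..4 {
        let r = (2i64 * a[i] as i64 * b[i] as i64 + (1 << 15)) >> 16;
        out[i] = r.clamp(i16::MIN as i64, i16::MAX as i64) as i16;
    }
    out
}

fn main() {
    // In Q15: 0.5 * 0.5 = 0.25 (8192), and MIN * MIN saturates to i16::MAX.
    assert_eq!(
        qrdmulh_s16([16384, -32768, 0, 1], [16384, -32768, 5, 1]),
        [8192, 32767, 0, 0]
    );
}
```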

vqrshl_s8Experimentalneon and v7

Signed saturating rounding shift left

vqrshl_s16Experimentalneon and v7

Signed saturating rounding shift left

vqrshl_s32Experimentalneon and v7

Signed saturating rounding shift left

vqrshl_s64Experimentalneon and v7

Signed saturating rounding shift left

vqrshl_u8Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshl_u16Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshl_u32Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshl_u64Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshlq_s8Experimentalneon and v7

Signed saturating rounding shift left

vqrshlq_s16Experimentalneon and v7

Signed saturating rounding shift left

vqrshlq_s32Experimentalneon and v7

Signed saturating rounding shift left

vqrshlq_s64Experimentalneon and v7

Signed saturating rounding shift left

vqrshlq_u8Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshlq_u16Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshlq_u32Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshlq_u64Experimentalneon and v7

Unsigned saturating rounding shift left

vqrshrn_n_s16Experimentalneon and v7

Signed saturating rounded shift right narrow

vqrshrn_n_s32Experimentalneon and v7

Signed saturating rounded shift right narrow

vqrshrn_n_s64Experimentalneon and v7

Signed saturating rounded shift right narrow

vqrshrn_n_u16Experimentalneon and v7

Unsigned saturating rounded shift right narrow

vqrshrn_n_u32Experimentalneon and v7

Unsigned saturating rounded shift right narrow

vqrshrn_n_u64Experimentalneon and v7

Unsigned saturating rounded shift right narrow

vqrshrun_n_s16Experimentalneon and v7

Signed saturating rounded shift right unsigned narrow

vqrshrun_n_s32Experimentalneon and v7

Signed saturating rounded shift right unsigned narrow

vqrshrun_n_s64Experimentalneon and v7

Signed saturating rounded shift right unsigned narrow

vqshl_n_s8Experimentalneon and v7

Signed saturating shift left

vqshl_n_s16Experimentalneon and v7

Signed saturating shift left

vqshl_n_s32Experimentalneon and v7

Signed saturating shift left

vqshl_n_s64Experimentalneon and v7

Signed saturating shift left

vqshl_n_u8Experimentalneon and v7

Unsigned saturating shift left

vqshl_n_u16Experimentalneon and v7

Unsigned saturating shift left

vqshl_n_u32Experimentalneon and v7

Unsigned saturating shift left

vqshl_n_u64Experimentalneon and v7

Unsigned saturating shift left

vqshl_s8Experimentalneon and v7

Signed saturating shift left

vqshl_s16Experimentalneon and v7

Signed saturating shift left

vqshl_s32Experimentalneon and v7

Signed saturating shift left

vqshl_s64Experimentalneon and v7

Signed saturating shift left

vqshl_u8Experimentalneon and v7

Unsigned saturating shift left

vqshl_u16Experimentalneon and v7

Unsigned saturating shift left

vqshl_u32Experimentalneon and v7

Unsigned saturating shift left

vqshl_u64Experimentalneon and v7

Unsigned saturating shift left

vqshlq_n_s8Experimentalneon and v7

Signed saturating shift left

vqshlq_n_s16Experimentalneon and v7

Signed saturating shift left

vqshlq_n_s32Experimentalneon and v7

Signed saturating shift left

vqshlq_n_s64Experimentalneon and v7

Signed saturating shift left

vqshlq_n_u8Experimentalneon and v7

Unsigned saturating shift left

vqshlq_n_u16Experimentalneon and v7

Unsigned saturating shift left

vqshlq_n_u32Experimentalneon and v7

Unsigned saturating shift left

vqshlq_n_u64Experimentalneon and v7

Unsigned saturating shift left

vqshlq_s8Experimentalneon and v7

Signed saturating shift left

vqshlq_s16Experimentalneon and v7

Signed saturating shift left

vqshlq_s32Experimentalneon and v7

Signed saturating shift left

vqshlq_s64Experimentalneon and v7

Signed saturating shift left

vqshlq_u8Experimentalneon and v7

Unsigned saturating shift left

vqshlq_u16Experimentalneon and v7

Unsigned saturating shift left

vqshlq_u32Experimentalneon and v7

Unsigned saturating shift left

vqshlq_u64Experimentalneon and v7

Unsigned saturating shift left
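A saturating left shift by an immediate (`vqshl_n_*`) clamps rather than dropping bits off the top. A portable sketch of `vqshl_n_s8` with the shift amount passed as a runtime argument for simplicity (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vqshl_n_s8: shift each lane left by n, saturating to
// the i8 range instead of losing high bits. Computed in i32 (n <= 7 here).
fn qshl_n_s8(a: [i8; 8], n: u32) -> [i8; 8] {
    a.map(|x| ((x as i32) << n).clamp(i8::MIN as i32, i8::MAX as i32) as i8)
}

fn main() {
    // 64 << 2 = 256 saturates to 127; the in-range lanes shift normally.
    assert_eq!(
        qshl_n_s8([64, 1, -2, 0, 0, 0, 0, 0], 2),
        [127, 4, -8, 0, 0, 0, 0, 0]
    );
}
```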

vqshrn_n_s16Experimentalneon and v7

Signed saturating shift right narrow

vqshrn_n_s32Experimentalneon and v7

Signed saturating shift right narrow

vqshrn_n_s64Experimentalneon and v7

Signed saturating shift right narrow

vqshrn_n_u16Experimentalneon and v7

Unsigned saturating shift right narrow

vqshrn_n_u32Experimentalneon and v7

Unsigned saturating shift right narrow

vqshrn_n_u64Experimentalneon and v7

Unsigned saturating shift right narrow

vqshrun_n_s16Experimentalneon and v7

Signed saturating shift right unsigned narrow

vqshrun_n_s32Experimentalneon and v7

Signed saturating shift right unsigned narrow

vqshrun_n_s64Experimentalneon and v7

Signed saturating shift right unsigned narrow

vqsub_s8Experimentalneon and v7

Saturating subtract

vqsub_s16Experimentalneon and v7

Saturating subtract

vqsub_s32Experimentalneon and v7

Saturating subtract

vqsub_s64Experimentalneon and v7

Saturating subtract

vqsub_u8Experimentalneon and v7

Saturating subtract

vqsub_u16Experimentalneon and v7

Saturating subtract

vqsub_u32Experimentalneon and v7

Saturating subtract

vqsub_u64Experimentalneon and v7

Saturating subtract

vqsubq_s8Experimentalneon and v7

Saturating subtract

vqsubq_s16Experimentalneon and v7

Saturating subtract

vqsubq_s32Experimentalneon and v7

Saturating subtract

vqsubq_s64Experimentalneon and v7

Saturating subtract

vqsubq_u8Experimentalneon and v7

Saturating subtract

vqsubq_u16Experimentalneon and v7

Saturating subtract

vqsubq_u32Experimentalneon and v7

Saturating subtract

vqsubq_u64Experimentalneon and v7

Saturating subtract

vraddhn_high_s16Experimentalneon and v7

Rounding Add returning High Narrow (high half).

vraddhn_high_s32Experimentalneon and v7

Rounding Add returning High Narrow (high half).

vraddhn_high_s64Experimentalneon and v7

Rounding Add returning High Narrow (high half).

vraddhn_high_u16Experimentalneon and v7

Rounding Add returning High Narrow (high half).

vraddhn_high_u32Experimentalneon and v7

Rounding Add returning High Narrow (high half).

vraddhn_high_u64Experimentalneon and v7

Rounding Add returning High Narrow (high half).

vraddhn_s16Experimentalneon and v7

Rounding Add returning High Narrow.

vraddhn_s32Experimentalneon and v7

Rounding Add returning High Narrow.

vraddhn_s64Experimentalneon and v7

Rounding Add returning High Narrow.

vraddhn_u16Experimentalneon and v7

Rounding Add returning High Narrow.

vraddhn_u32Experimentalneon and v7

Rounding Add returning High Narrow.

vraddhn_u64Experimentalneon and v7

Rounding Add returning High Narrow.
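"Rounding Add returning High Narrow" (`vraddhn_*`) adds two wide vectors, rounds, and keeps only the high half of each sum at half the width. A portable sketch of `vraddhn_s16` under the usual `(a + b + 2^(half-1)) >> half` formulation (illustrative helper name, not the intrinsic itself):

```rust
// Portable sketch of vraddhn_s16: add i16 lanes, add the rounding constant
// (half a unit in the last kept place), and return the high 8 bits as i8.
fn raddhn_s16(a: [i16; 8], b: [i16; 8]) -> [i8; 8] {
    let mut out = [0i8; 8];
    for i in 0..8 {
        out[i] = ((a[i] as i32 + b[i] as i32 + (1 << 7)) >> 8) as i8;
    }
    out
}

fn main() {
    assert_eq!(
        raddhn_s16(
            [1000, 255, 0, -1000, 0, 0, 0, 0],
            [1000, 0, 0, 0, 0, 0, 0, 0]
        ),
        [8, 1, 0, -4, 0, 0, 0, 0]
    );
}
```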

vrecpe_f32Experimentalneon and v7

Reciprocal estimate.

vrecpeq_f32Experimentalneon and v7

Reciprocal estimate.

vreinterpret_f32_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_f32_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p8_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_p16_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s8_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s16_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s32_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_s64_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u8_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u16_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u32_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpret_u64_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_f32_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p8_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_p16_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s8_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s16_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s32_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_s64_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u8_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_u32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u16_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u32_u64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_f32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_p8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_p16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_s8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_s16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_s32Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_s64Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_u8Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_u16Experimentalneon and v7

Vector reinterpret cast operation

vreinterpretq_u64_u32Experimentalneon and v7

Vector reinterpret cast operation
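The long run of `vreinterpret`/`vreinterpretq` entries above all perform the same operation: a bit-preserving cast between vector types of equal width, with no value conversion. A scalar sketch of the idea on a single lane (hypothetical helper names; safe `to_bits`/`from_bits` stand in for the intrinsic):

```rust
// Model of vreinterpret_u32_f32 / vreinterpret_f32_u32 on one lane:
// the bit pattern is untouched; only the type changes.
fn reinterpret_u32_f32(x: f32) -> u32 {
    x.to_bits()
}

fn reinterpret_f32_u32(x: u32) -> f32 {
    f32::from_bits(x)
}

fn main() {
    let bits = reinterpret_u32_f32(1.0);
    assert_eq!(bits, 0x3f80_0000); // IEEE-754 encoding of 1.0f32
    assert_eq!(reinterpret_f32_u32(bits), 1.0);
}
```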

vrev16_p8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev16_s8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev16_u8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev16q_p8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev16q_s8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev16q_u8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32_p8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32_p16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32_s8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32_s16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32_u8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32_u16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32q_p8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32q_p16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32q_s8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32q_s16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32q_u8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev32q_u16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_f32Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_p8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_p16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_s8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_s16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_s32Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_u8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_u16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64_u32Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_f32Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_p8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_p16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_s8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_s16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_s32Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_u8Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_u16Experimentalneon and v7

Reversing vector elements (swap endianness)

vrev64q_u32Experimentalneon and v7

Reversing vector elements (swap endianness)
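The `vrev16`/`vrev32`/`vrev64` families reverse the order of elements inside each 16-, 32-, or 64-bit group of the vector. A scalar model of `vrev16_u8` (hypothetical name), which swaps each adjacent byte pair:

```rust
// Each 16-bit group of an 8-byte vector holds two u8 elements;
// vrev16 swaps them, i.e. a per-group byte-order (endianness) swap.
fn vrev16_u8_model(v: [u8; 8]) -> [u8; 8] {
    let mut out = v;
    for pair in out.chunks_mut(2) {
        pair.swap(0, 1);
    }
    out
}

fn main() {
    assert_eq!(
        vrev16_u8_model([0, 1, 2, 3, 4, 5, 6, 7]),
        [1, 0, 3, 2, 5, 4, 7, 6]
    );
}
```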

vrhadd_s8Experimentalneon and v7

Rounding halving add

vrhadd_s16Experimentalneon and v7

Rounding halving add

vrhadd_s32Experimentalneon and v7

Rounding halving add

vrhadd_u8Experimentalneon and v7

Rounding halving add

vrhadd_u16Experimentalneon and v7

Rounding halving add

vrhadd_u32Experimentalneon and v7

Rounding halving add

vrhaddq_s8Experimentalneon and v7

Rounding halving add

vrhaddq_s16Experimentalneon and v7

Rounding halving add

vrhaddq_s32Experimentalneon and v7

Rounding halving add

vrhaddq_u8Experimentalneon and v7

Rounding halving add

vrhaddq_u16Experimentalneon and v7

Rounding halving add

vrhaddq_u32Experimentalneon and v7

Rounding halving add
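Rounding halving add computes `(a + b + 1) >> 1` per lane, using a wider intermediate so the sum cannot overflow. Scalar model (hypothetical name):

```rust
// (a + b + 1) >> 1, evaluated in u16 so 255 + 255 + 1 does not overflow.
fn vrhadd_u8_model(a: u8, b: u8) -> u8 {
    ((a as u16 + b as u16 + 1) >> 1) as u8
}

fn main() {
    assert_eq!(vrhadd_u8_model(1, 2), 2); // (1 + 2 + 1) >> 1
    assert_eq!(vrhadd_u8_model(255, 255), 255);
}
```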

vrndn_f32Experimentalneon and fp-armv8,v8

Floating-point round to integral, to nearest with ties to even

vrndnq_f32Experimentalneon and fp-armv8,v8

Floating-point round to integral, to nearest with ties to even
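`vrndn_f32` rounds to the nearest integral value with ties going to the even neighbour, unlike `f32::round`, which rounds ties away from zero. Newer toolchains expose `f32::round_ties_even`; the manual sketch below avoids that dependency:

```rust
// Round to nearest, ties to even: 2.5 -> 2.0 but 3.5 -> 4.0.
fn round_ties_even_model(x: f32) -> f32 {
    let r = x.round(); // rounds ties away from zero
    if (x - x.trunc()).abs() == 0.5 && r % 2.0 != 0.0 {
        r - x.signum() // step back to the even neighbour
    } else {
        r
    }
}

fn main() {
    assert_eq!(round_ties_even_model(2.5), 2.0);
    assert_eq!(round_ties_even_model(3.5), 4.0);
    assert_eq!(round_ties_even_model(-2.5), -2.0);
    assert_eq!(round_ties_even_model(2.3), 2.0);
}
```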

vrshl_s8Experimentalneon and v7

Signed rounding shift left

vrshl_s16Experimentalneon and v7

Signed rounding shift left

vrshl_s32Experimentalneon and v7

Signed rounding shift left

vrshl_s64Experimentalneon and v7

Signed rounding shift left

vrshl_u8Experimentalneon and v7

Unsigned rounding shift left

vrshl_u16Experimentalneon and v7

Unsigned rounding shift left

vrshl_u32Experimentalneon and v7

Unsigned rounding shift left

vrshl_u64Experimentalneon and v7

Unsigned rounding shift left

vrshlq_s8Experimentalneon and v7

Signed rounding shift left

vrshlq_s16Experimentalneon and v7

Signed rounding shift left

vrshlq_s32Experimentalneon and v7

Signed rounding shift left

vrshlq_s64Experimentalneon and v7

Signed rounding shift left

vrshlq_u8Experimentalneon and v7

Unsigned rounding shift left

vrshlq_u16Experimentalneon and v7

Unsigned rounding shift left

vrshlq_u32Experimentalneon and v7

Unsigned rounding shift left

vrshlq_u64Experimentalneon and v7

Unsigned rounding shift left

vrshr_n_s8Experimentalneon and v7

Signed rounding shift right

vrshr_n_s16Experimentalneon and v7

Signed rounding shift right

vrshr_n_s32Experimentalneon and v7

Signed rounding shift right

vrshr_n_s64Experimentalneon and v7

Signed rounding shift right

vrshr_n_u8Experimentalneon and v7

Unsigned rounding shift right

vrshr_n_u16Experimentalneon and v7

Unsigned rounding shift right

vrshr_n_u32Experimentalneon and v7

Unsigned rounding shift right

vrshr_n_u64Experimentalneon and v7

Unsigned rounding shift right

vrshrn_n_s16Experimentalneon and v7

Rounding shift right narrow

vrshrn_n_s32Experimentalneon and v7

Rounding shift right narrow

vrshrn_n_s64Experimentalneon and v7

Rounding shift right narrow

vrshrn_n_u16Experimentalneon and v7

Rounding shift right narrow

vrshrn_n_u32Experimentalneon and v7

Rounding shift right narrow

vrshrn_n_u64Experimentalneon and v7

Rounding shift right narrow

vrshrq_n_s8Experimentalneon and v7

Signed rounding shift right

vrshrq_n_s16Experimentalneon and v7

Signed rounding shift right

vrshrq_n_s32Experimentalneon and v7

Signed rounding shift right

vrshrq_n_s64Experimentalneon and v7

Signed rounding shift right

vrshrq_n_u8Experimentalneon and v7

Unsigned rounding shift right

vrshrq_n_u16Experimentalneon and v7

Unsigned rounding shift right

vrshrq_n_u32Experimentalneon and v7

Unsigned rounding shift right

vrshrq_n_u64Experimentalneon and v7

Unsigned rounding shift right
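The rounding right shifts (`vrshr_n` and variants) add half of the discarded weight before shifting, so results round to nearest rather than truncating toward negative infinity. Scalar model (hypothetical name; `n` limited to 1..=31 here):

```rust
// (x + (1 << (n - 1))) >> n: add half the discarded weight, then shift.
fn vrshr_n_s32_model(x: i32, n: u32) -> i32 {
    debug_assert!((1..=31).contains(&n));
    (x + (1 << (n - 1))) >> n
}

fn main() {
    assert_eq!(vrshr_n_s32_model(7, 2), 2);   // 7 / 4 = 1.75 -> 2
    assert_eq!(vrshr_n_s32_model(5, 2), 1);   // 5 / 4 = 1.25 -> 1
    assert_eq!(vrshr_n_s32_model(-7, 2), -2); // -1.75 -> -2
}
```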

vrsqrte_f32Experimentalneon and v7

Reciprocal square-root estimate.

vrsqrteq_f32Experimentalneon and v7

Reciprocal square-root estimate.
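`vrsqrte_f32` returns only a coarse estimate of `1/sqrt(x)`; in practice it is refined with Newton-Raphson steps (`vrsqrts_f32` computes the step factor). A scalar sketch of that refinement loop, with a deliberately poor starting value standing in for the hardware's low-precision estimate:

```rust
// One Newton-Raphson refinement step for est ~ 1/sqrt(x):
// est' = est * (3 - x * est^2) / 2. This mirrors the factor that
// vrsqrts_f32 feeds back into the estimate from vrsqrte_f32.
fn rsqrt_step(x: f32, est: f32) -> f32 {
    est * (3.0 - x * est * est) / 2.0
}

fn main() {
    // Start from a poor estimate of 1/sqrt(4) = 0.5 and refine twice.
    let mut est = 0.4_f32;
    for _ in 0..2 {
        est = rsqrt_step(4.0, est);
    }
    assert!((est - 0.5).abs() < 0.01);
}
```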

vrsra_n_s8Experimentalneon and v7

Signed rounding shift right and accumulate

vrsra_n_s16Experimentalneon and v7

Signed rounding shift right and accumulate

vrsra_n_s32Experimentalneon and v7

Signed rounding shift right and accumulate

vrsra_n_s64Experimentalneon and v7

Signed rounding shift right and accumulate

vrsra_n_u8Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsra_n_u16Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsra_n_u32Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsra_n_u64Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsraq_n_s8Experimentalneon and v7

Signed rounding shift right and accumulate

vrsraq_n_s16Experimentalneon and v7

Signed rounding shift right and accumulate

vrsraq_n_s32Experimentalneon and v7

Signed rounding shift right and accumulate

vrsraq_n_s64Experimentalneon and v7

Signed rounding shift right and accumulate

vrsraq_n_u8Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsraq_n_u16Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsraq_n_u32Experimentalneon and v7

Unsigned rounding shift right and accumulate

vrsraq_n_u64Experimentalneon and v7

Unsigned rounding shift right and accumulate
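`vrsra_n` performs a rounding right shift (as in `vrshr_n`) and adds the result to the accumulator operand. Scalar model:

```rust
// acc + ((x + (1 << (n - 1))) >> n)
fn vrsra_n_s32_model(acc: i32, x: i32, n: u32) -> i32 {
    acc + ((x + (1 << (n - 1))) >> n)
}

fn main() {
    assert_eq!(vrsra_n_s32_model(10, 7, 2), 12); // 10 + round(7 / 4)
}
```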

vset_lane_f32Experimentalneon and v7

Insert vector element from another vector element

vset_lane_p8Experimentalneon and v7

Insert vector element from another vector element

vset_lane_p16Experimentalneon and v7

Insert vector element from another vector element

vset_lane_p64Experimentalneon,aes and crypto,v8

Insert vector element from another vector element

vset_lane_s8Experimentalneon and v7

Insert vector element from another vector element

vset_lane_s16Experimentalneon and v7

Insert vector element from another vector element

vset_lane_s32Experimentalneon and v7

Insert vector element from another vector element

vset_lane_s64Experimentalneon and v7

Insert vector element from another vector element

vset_lane_u8Experimentalneon and v7

Insert vector element from another vector element

vset_lane_u16Experimentalneon and v7

Insert vector element from another vector element

vset_lane_u32Experimentalneon and v7

Insert vector element from another vector element

vset_lane_u64Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_f32Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_p8Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_p16Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_p64Experimentalneon,aes and crypto,v8

Insert vector element from another vector element

vsetq_lane_s8Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_s16Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_s32Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_s64Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_u8Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_u16Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_u32Experimentalneon and v7

Insert vector element from another vector element

vsetq_lane_u64Experimentalneon and v7

Insert vector element from another vector element
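`vset_lane` copies a scalar into one lane of a vector, leaving the other lanes untouched; in the real intrinsic the lane index is a compile-time constant. Scalar model with a runtime index:

```rust
// Replace lane `lane` of v with `value`; other lanes pass through.
fn vset_lane_s16_model(value: i16, mut v: [i16; 4], lane: usize) -> [i16; 4] {
    v[lane] = value;
    v
}

fn main() {
    assert_eq!(vset_lane_s16_model(9, [1, 2, 3, 4], 2), [1, 2, 9, 4]);
}
```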

vsha1cq_u32Experimentalcrypto,v8

SHA1 hash update accelerator, choose.

vsha1h_u32Experimentalcrypto,v8

SHA1 fixed rotate.

vsha1mq_u32Experimentalcrypto,v8

SHA1 hash update accelerator, majority.

vsha1pq_u32Experimentalcrypto,v8

SHA1 hash update accelerator, parity.

vsha1su0q_u32Experimentalcrypto,v8

SHA1 schedule update accelerator, first part.

vsha1su1q_u32Experimentalcrypto,v8

SHA1 schedule update accelerator, second part.

vsha256h2q_u32Experimentalcrypto,v8

SHA256 hash update accelerator, upper part.

vsha256hq_u32Experimentalcrypto,v8

SHA256 hash update accelerator.

vsha256su0q_u32Experimentalcrypto,v8

SHA256 schedule update accelerator, first part.

vsha256su1q_u32Experimentalcrypto,v8

SHA256 schedule update accelerator, second part.
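`vsha1h_u32` is the simplest of the SHA accelerators: it performs the fixed rotate used between SHA-1 rounds, which (assuming the usual SHA1H semantics) is a rotate left by 30 bits. Scalar model:

```rust
// SHA1H fixed rotate: rotate the 32-bit word left by 30 (right by 2).
fn sha1h_model(x: u32) -> u32 {
    x.rotate_left(30)
}

fn main() {
    assert_eq!(sha1h_model(0x8000_0000), 0x2000_0000);
    assert_eq!(sha1h_model(0x0000_0004), 0x0000_0001);
}
```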

vshl_n_s8Experimentalneon and v7

Shift left

vshl_n_s16Experimentalneon and v7

Shift left

vshl_n_s32Experimentalneon and v7

Shift left

vshl_n_s64Experimentalneon and v7

Shift left

vshl_n_u8Experimentalneon and v7

Shift left

vshl_n_u16Experimentalneon and v7

Shift left

vshl_n_u32Experimentalneon and v7

Shift left

vshl_n_u64Experimentalneon and v7

Shift left

vshl_s8Experimentalneon and v7

Signed Shift left

vshl_s16Experimentalneon and v7

Signed Shift left

vshl_s32Experimentalneon and v7

Signed Shift left

vshl_s64Experimentalneon and v7

Signed Shift left

vshl_u8Experimentalneon and v7

Unsigned Shift left

vshl_u16Experimentalneon and v7

Unsigned Shift left

vshl_u32Experimentalneon and v7

Unsigned Shift left

vshl_u64Experimentalneon and v7

Unsigned Shift left

vshll_n_s8Experimentalneon and v7

Signed shift left long

vshll_n_s16Experimentalneon and v7

Signed shift left long

vshll_n_s32Experimentalneon and v7

Signed shift left long

vshll_n_u8Experimentalneon and v7

Unsigned shift left long

vshll_n_u16Experimentalneon and v7

Unsigned shift left long

vshll_n_u32Experimentalneon and v7

Unsigned shift left long
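`vshll_n` widens each element to twice its width and then shifts left, so no bits are lost for shift amounts up to the source element width. Scalar model for the s8 -> s16 case (hypothetical name):

```rust
// Widen i8 lanes to i16, then shift left by n (n <= 8 in the intrinsic).
fn vshll_n_s8_model(v: [i8; 8], n: u32) -> [i16; 8] {
    let mut out = [0i16; 8];
    for (o, &x) in out.iter_mut().zip(v.iter()) {
        *o = (x as i16) << n;
    }
    out
}

fn main() {
    assert_eq!(
        vshll_n_s8_model([1, -1, 2, 0, 0, 0, 0, 0], 4),
        [16, -16, 32, 0, 0, 0, 0, 0]
    );
}
```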

vshlq_n_s8Experimentalneon and v7

Shift left

vshlq_n_s16Experimentalneon and v7

Shift left

vshlq_n_s32Experimentalneon and v7

Shift left

vshlq_n_s64Experimentalneon and v7

Shift left

vshlq_n_u8Experimentalneon and v7

Shift left

vshlq_n_u16Experimentalneon and v7

Shift left

vshlq_n_u32Experimentalneon and v7

Shift left

vshlq_n_u64Experimentalneon and v7

Shift left

vshlq_s8Experimentalneon and v7

Signed Shift left

vshlq_s16Experimentalneon and v7

Signed Shift left

vshlq_s32Experimentalneon and v7

Signed Shift left

vshlq_s64Experimentalneon and v7

Signed Shift left

vshlq_u8Experimentalneon and v7

Unsigned Shift left

vshlq_u16Experimentalneon and v7

Unsigned Shift left

vshlq_u32Experimentalneon and v7

Unsigned Shift left

vshlq_u64Experimentalneon and v7

Unsigned Shift left

vshr_n_s8Experimentalneon and v7

Shift right

vshr_n_s16Experimentalneon and v7

Shift right

vshr_n_s32Experimentalneon and v7

Shift right

vshr_n_s64Experimentalneon and v7

Shift right

vshr_n_u8Experimentalneon and v7

Shift right

vshr_n_u16Experimentalneon and v7

Shift right

vshr_n_u32Experimentalneon and v7

Shift right

vshr_n_u64Experimentalneon and v7

Shift right

vshrn_n_s16Experimentalneon and v7

Shift right narrow

vshrn_n_s32Experimentalneon and v7

Shift right narrow

vshrn_n_s64Experimentalneon and v7

Shift right narrow

vshrn_n_u16Experimentalneon and v7

Shift right narrow

vshrn_n_u32Experimentalneon and v7

Shift right narrow

vshrn_n_u64Experimentalneon and v7

Shift right narrow
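`vshrn_n` goes the other direction: shift right, then keep only the low half of each element, producing half-width lanes. Scalar model for s16 -> s8:

```rust
// Shift each i16 lane right by n, then truncate to the low 8 bits.
fn vshrn_n_s16_model(v: [i16; 4], n: u32) -> [i8; 4] {
    let mut out = [0i8; 4];
    for (o, &x) in out.iter_mut().zip(v.iter()) {
        *o = (x >> n) as i8;
    }
    out
}

fn main() {
    assert_eq!(vshrn_n_s16_model([0x1234, 0x00ff, -256, 0], 8), [0x12, 0, -1, 0]);
}
```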

vshrq_n_s8Experimentalneon and v7

Shift right

vshrq_n_s16Experimentalneon and v7

Shift right

vshrq_n_s32Experimentalneon and v7

Shift right

vshrq_n_s64Experimentalneon and v7

Shift right

vshrq_n_u8Experimentalneon and v7

Shift right

vshrq_n_u16Experimentalneon and v7

Shift right

vshrq_n_u32Experimentalneon and v7

Shift right

vshrq_n_u64Experimentalneon and v7

Shift right

vsli_n_p8Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_p16Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_s8Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_s16Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_s32Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_s64Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_u8Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_u16Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_u32Experimentalneon,v7

Shift Left and Insert (immediate)

vsli_n_u64Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_p8Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_p16Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_s8Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_s16Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_s32Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_s64Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_u8Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_u16Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_u32Experimentalneon,v7

Shift Left and Insert (immediate)

vsliq_n_u64Experimentalneon,v7

Shift Left and Insert (immediate)
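Shift Left and Insert shifts the second operand left and fills the vacated low bits from the first (destination) operand instead of with zeros. Scalar model (assuming that operand order; 0 < n < 8 here):

```rust
// result = (b << n) | (a & low_n_bits): the shifted-in bits come from a.
fn vsli_n_u8_model(a: u8, b: u8, n: u32) -> u8 {
    let mask = (1u8 << n) - 1;
    (b << n) | (a & mask)
}

fn main() {
    assert_eq!(vsli_n_u8_model(0b1010_1010, 0b0000_1111, 4), 0b1111_1010);
}
```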

vsra_n_s8Experimentalneon and v7

Signed shift right and accumulate

vsra_n_s16Experimentalneon and v7

Signed shift right and accumulate

vsra_n_s32Experimentalneon and v7

Signed shift right and accumulate

vsra_n_s64Experimentalneon and v7

Signed shift right and accumulate

vsra_n_u8Experimentalneon and v7

Unsigned shift right and accumulate

vsra_n_u16Experimentalneon and v7

Unsigned shift right and accumulate

vsra_n_u32Experimentalneon and v7

Unsigned shift right and accumulate

vsra_n_u64Experimentalneon and v7

Unsigned shift right and accumulate

vsraq_n_s8Experimentalneon and v7

Signed shift right and accumulate

vsraq_n_s16Experimentalneon and v7

Signed shift right and accumulate

vsraq_n_s32Experimentalneon and v7

Signed shift right and accumulate

vsraq_n_s64Experimentalneon and v7

Signed shift right and accumulate

vsraq_n_u8Experimentalneon and v7

Unsigned shift right and accumulate

vsraq_n_u16Experimentalneon and v7

Unsigned shift right and accumulate

vsraq_n_u32Experimentalneon and v7

Unsigned shift right and accumulate

vsraq_n_u64Experimentalneon and v7

Unsigned shift right and accumulate

vsri_n_p8Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_p16Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_s8Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_s16Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_s32Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_s64Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_u8Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_u16Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_u32Experimentalneon,v7

Shift Right and Insert (immediate)

vsri_n_u64Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_p8Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_p16Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_s8Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_s16Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_s32Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_s64Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_u8Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_u16Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_u32Experimentalneon,v7

Shift Right and Insert (immediate)

vsriq_n_u64Experimentalneon,v7

Shift Right and Insert (immediate)

vst1_f32Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_p8Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_p16Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_s8Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_s16Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_s32Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_s64Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_u8Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_u16Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_u32Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1_u64Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_f32Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_p8Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_p16Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s8Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s16Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s32Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s64Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u8Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u16Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u32Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u64Experimentalneon,v7

Store multiple single-element structures from one, two, three, or four registers.

vsub_f32Experimentalneon and v7

Subtract

vsub_s8Experimentalneon and v7

Subtract

vsub_s16Experimentalneon and v7

Subtract

vsub_s32Experimentalneon and v7

Subtract

vsub_s64Experimentalneon and v7

Subtract

vsub_u8Experimentalneon and v7

Subtract

vsub_u16Experimentalneon and v7

Subtract

vsub_u32Experimentalneon and v7

Subtract

vsub_u64Experimentalneon and v7

Subtract

vsubhn_high_s16Experimentalneon and v7

Subtract returning high narrow

vsubhn_high_s32Experimentalneon and v7

Subtract returning high narrow

vsubhn_high_s64Experimentalneon and v7

Subtract returning high narrow

vsubhn_high_u16Experimentalneon and v7

Subtract returning high narrow

vsubhn_high_u32Experimentalneon and v7

Subtract returning high narrow

vsubhn_high_u64Experimentalneon and v7

Subtract returning high narrow

vsubhn_s16Experimentalneon and v7

Subtract returning high narrow

vsubhn_s32Experimentalneon and v7

Subtract returning high narrow

vsubhn_s64Experimentalneon and v7

Subtract returning high narrow

vsubhn_u16Experimentalneon and v7

Subtract returning high narrow

vsubhn_u32Experimentalneon and v7

Subtract returning high narrow

vsubhn_u64Experimentalneon and v7

Subtract returning high narrow
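Subtract returning high narrow computes the full-width difference and keeps only its high half, which is how fixed-point code discards fractional bits after a subtraction. Scalar model for s16 -> s8:

```rust
// Wrapping difference, then the high byte of the 16-bit result.
fn vsubhn_s16_model(a: i16, b: i16) -> i8 {
    (a.wrapping_sub(b) >> 8) as i8
}

fn main() {
    assert_eq!(vsubhn_s16_model(0x1234, 0x0034), 0x12);
}
```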

vsubl_s8Experimentalneon and v7

Signed Subtract Long

vsubl_s16Experimentalneon and v7

Signed Subtract Long

vsubl_s32Experimentalneon and v7

Signed Subtract Long

vsubl_u8Experimentalneon and v7

Unsigned Subtract Long

vsubl_u16Experimentalneon and v7

Unsigned Subtract Long

vsubl_u32Experimentalneon and v7

Unsigned Subtract Long

vsubq_f32Experimentalneon and v7

Subtract

vsubq_s8Experimentalneon and v7

Subtract

vsubq_s16Experimentalneon and v7

Subtract

vsubq_s32Experimentalneon and v7

Subtract

vsubq_s64Experimentalneon and v7

Subtract

vsubq_u8Experimentalneon and v7

Subtract

vsubq_u16Experimentalneon and v7

Subtract

vsubq_u32Experimentalneon and v7

Subtract

vsubq_u64Experimentalneon and v7

Subtract

vsubw_s8 Experimental (neon and v7)

Signed Subtract Wide

vsubw_s16 Experimental (neon and v7)

Signed Subtract Wide

vsubw_s32 Experimental (neon and v7)

Signed Subtract Wide

vsubw_u8 Experimental (neon and v7)

Unsigned Subtract Wide

vsubw_u16 Experimental (neon and v7)

Unsigned Subtract Wide

vsubw_u32 Experimental (neon and v7)

Unsigned Subtract Wide
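
"Subtract Wide" takes a wide first operand and a narrow second operand: each narrow lane is widened and subtracted from the corresponding wide lane, yielding a wide result. A scalar sketch of vsubw_s8 (vsubw_s8_sketch is an illustrative name):

```rust
// Scalar sketch of vsubw_s8: widen each i8 lane of `b` to i16, then
// subtract it from the corresponding i16 lane of `a`.
fn vsubw_s8_sketch(a: [i16; 8], b: [i8; 8]) -> [i16; 8] {
    let mut r = [0i16; 8];
    for i in 0..8 {
        r[i] = a[i].wrapping_sub(b[i] as i16);
    }
    r
}
```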

vtbl1_p8 Experimental (neon and v7)

Table look-up

vtbl1_s8 Experimental (neon and v7)

Table look-up

vtbl1_u8 Experimental (neon and v7)

Table look-up

vtbl2_p8 Experimental (neon and v7)

Table look-up

vtbl2_s8 Experimental (neon and v7)

Table look-up

vtbl2_u8 Experimental (neon and v7)

Table look-up

vtbl3_p8 Experimental (neon and v7)

Table look-up

vtbl3_s8 Experimental (neon and v7)

Table look-up

vtbl3_u8 Experimental (neon and v7)

Table look-up

vtbl4_p8 Experimental (neon and v7)

Table look-up

vtbl4_s8 Experimental (neon and v7)

Table look-up

vtbl4_u8 Experimental (neon and v7)

Table look-up
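
The vtbl1 through vtbl4 intrinsics perform a byte-wise table look-up: each byte of the index vector selects a byte from a table of one to four 8-byte vectors, and an out-of-range index produces 0. A scalar sketch of vtbl1_u8, whose table is a single 8-byte vector (vtbl1_u8_sketch is an illustrative name):

```rust
// Scalar sketch of vtbl1_u8: each index byte selects a byte from an
// 8-byte table; indices >= 8 yield 0.
fn vtbl1_u8_sketch(table: [u8; 8], idx: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = if (idx[i] as usize) < table.len() {
            table[idx[i] as usize]
        } else {
            0 // vtbl writes zero for out-of-range indices
        };
    }
    r
}
```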

vtbx1_p8 Experimental (neon and v7)

Extended table look-up

vtbx1_s8 Experimental (neon and v7)

Extended table look-up

vtbx1_u8 Experimental (neon and v7)

Extended table look-up

vtbx2_p8 Experimental (neon and v7)

Extended table look-up

vtbx2_s8 Experimental (neon and v7)

Extended table look-up

vtbx2_u8 Experimental (neon and v7)

Extended table look-up

vtbx3_p8 Experimental (neon and v7)

Extended table look-up

vtbx3_s8 Experimental (neon and v7)

Extended table look-up

vtbx3_u8 Experimental (neon and v7)

Extended table look-up

vtbx4_p8 Experimental (neon and v7)

Extended table look-up

vtbx4_s8 Experimental (neon and v7)

Extended table look-up

vtbx4_u8 Experimental (neon and v7)

Extended table look-up
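
The extended look-up (vtbx*) differs from vtbl* only in how out-of-range indices are handled: instead of writing 0, the corresponding byte of the destination operand is left unchanged. A scalar sketch of vtbx1_u8 (vtbx1_u8_sketch is an illustrative name):

```rust
// Scalar sketch of vtbx1_u8: like vtbl1_u8, but an out-of-range index
// keeps the existing byte from `dest` instead of writing zero.
fn vtbx1_u8_sketch(dest: [u8; 8], table: [u8; 8], idx: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = if (idx[i] as usize) < table.len() {
            table[idx[i] as usize]
        } else {
            dest[i] // out-of-range: preserve the destination byte
        };
    }
    r
}
```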

vtst_p8 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtst_s8 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtst_s16 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtst_s32 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtst_u8 Experimental (neon and v7)

Unsigned compare bitwise test bits nonzero

vtst_u16 Experimental (neon and v7)

Unsigned compare bitwise test bits nonzero

vtst_u32 Experimental (neon and v7)

Unsigned compare bitwise test bits nonzero

vtstq_p8 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtstq_s8 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtstq_s16 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtstq_s32 Experimental (neon and v7)

Signed compare bitwise test bits nonzero

vtstq_u8 Experimental (neon and v7)

Unsigned compare bitwise test bits nonzero

vtstq_u16 Experimental (neon and v7)

Unsigned compare bitwise test bits nonzero

vtstq_u32 Experimental (neon and v7)

Unsigned compare bitwise test bits nonzero
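
The vtst* intrinsics compute a lane-wise bitwise test: each result lane is all ones if the bitwise AND of the corresponding input lanes is nonzero, and all zeros otherwise. A scalar sketch of vtst_u8 (vtst_u8_sketch is an illustrative name):

```rust
// Scalar sketch of vtst_u8: per-lane mask of whether (a & b) has any
// bit set. All-ones (0xFF) if nonzero, all-zeros otherwise.
fn vtst_u8_sketch(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = if a[i] & b[i] != 0 { 0xFF } else { 0x00 };
    }
    r
}
```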