Metal Compute versus ARM Neon
I was considering migrating my current NEON code (NEON is ARM's vector-processing instruction set) to Metal, but after running the HelloCompute sample code (which demonstrates how to perform data-parallel computations on the GPU), the GPU seems much slower than the CPU.
The HelloCompute project takes 13 ms on an iPhone 5S to run this very basic kernel over a 512 x 512 RGBA texture:
// Kernel signature added for context; the texture indices may differ in the sample.
kernel void passthrough(texture2d<half, access::read>  inTexture  [[texture(0)]],
                        texture2d<half, access::write> outTexture [[texture(1)]],
                        uint2 gid [[thread_position_in_grid]])
{
    half4 inColor = inTexture.read(gid);
    outTexture.write(inColor, gid);
}
In comparison, my NEON code takes less than 1 ms.
Shouldn't the GPU be at least as fast as the CPU?
metal neon
asked Nov 20 at 2:47
Yoshi
That is a hello-world example; you don't want to use it to compare times, since it is just a simple read and write. The GPU wins on more complex operations and on really large amounts of I/O, where the reads issued by different compute invocations can all run at the same time. The value really depends on exactly what operations you are doing and how easily they can be parallelized.
– MoDJ
Nov 20 at 7:16
Could your test unintentionally be limited to the screen refresh rate?
– Rhythmic Fistman
Nov 20 at 22:00
1 Answer
GPGPU only makes sense when you are dealing with a huge amount of computation, because the data-transfer and hardware-initialization overhead spoils the fun, on top of horrible APIs such as OpenCL.
NEON, on the other hand, is tightly integrated into the main CPU pipeline and is therefore far more responsive, while still packing more than adequate punch.
AI and crypto-coin mining have been pretty much the only areas I've seen so far where GPGPU makes sense. For anything lighter, SIMD is the way to go.
And since crypto-coin mining is virtually dead, and dedicated AI accelerator IP blocks are around the corner, I'd say GPGPU is almost pointless.
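For scale, the CPU-side equivalent of that kernel is little more than a bulk copy, which NEON (or any SIMD unit) handles at close to memory bandwidth with no dispatch overhead at all. Here is a rough Swift sketch using the standard library's SIMD types, which the compiler lowers to NEON on ARM; the asker's actual NEON code is presumably C with arm_neon intrinsics, so this is only an approximation, and it assumes Swift 5.7+ for loadUnaligned. The function and parameter names are illustrative.

    // Copies a tightly packed RGBA8 image 16 bytes (4 pixels) at a time.
    // `src` and `dst` are assumed to point at byteCount valid bytes each.
    func passThrough(src: UnsafeRawPointer, dst: UnsafeMutableRawPointer, byteCount: Int) {
        var offset = 0
        while offset + 16 <= byteCount {
            // One 128-bit register's worth per iteration; lowers to NEON loads/stores.
            let v = src.loadUnaligned(fromByteOffset: offset, as: SIMD16<UInt8>.self)
            dst.storeBytes(of: v, toByteOffset: offset, as: SIMD16<UInt8>.self)
            offset += 16
        }
        while offset < byteCount {   // scalar tail for sizes not divisible by 16
            dst.storeBytes(of: src.load(fromByteOffset: offset, as: UInt8.self),
                           toByteOffset: offset, as: UInt8.self)
            offset += 1
        }
    }

A 512 x 512 RGBA8 image is only 1 MB, so a loop like this finishing in well under a millisecond on an A7 is entirely plausible, which is why a trivial read-and-write kernel is a poor benchmark for the GPU.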
edited Nov 20 at 3:22
answered Nov 20 at 3:17
Jake 'Alquimista' LEE