1.
Question 1: The first CUDA-compatible Nvidia GPU devices had 256 KB of global memory and now have 2 TB of global memory.
False
The first CUDA cards had at minimum 256 MB of global memory, and current high-end cards have around 4 GB.
2.
Question 2: Which of the following are Nvidia GPU architectures?
This architecture was mostly used from 2008 to 2010.
Ampere
This is the architecture that was released in 2020.
3.
Question 3: The nvidia-smi CLI tool supports which of the following options, and what is each one used for?
-i This lets you query a specific Nvidia device by its assigned id
Correct. An alternative is the --id long form.
-x Outputs in XML format
This is correct, so if you would like to process the output of nvidia-smi as XML, this is the option to use (typically combined with -q).
-L This lists all Nvidia GPUs
Correct. You can also use the --list-gpus long form for the same effect.
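For reference, the three options above might be combined like this on a machine with the Nvidia driver installed (a sketch; the output depends on your hardware):

```shell
# Query only the device with id 0 (long form: --id=0)
nvidia-smi -i 0

# Emit the full device report as XML for downstream parsing
nvidia-smi -q -x

# List every detected GPU with its UUID (long form: --list-gpus)
nvidia-smi -L
```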
4.
Question 4: The lspci command was developed by Nvidia and is installed as part of CUDA expressly for the listing of GPU devices.
False
lspci is part of the pciutils package, is not an Nvidia tool, and lists all PCI devices, not just GPUs.
5.
Question 5: Which of the following is an example of a C++ primitive data type?
int
This is one of the canonical primitives.
6.
Question 6: Which highlighted (bold) parts of this pageable memory allocation are not correct?
char *words:: input_a = (char *)cudaMalloc(1024 * sizeof(char));
::
Correct. :: is C++'s scope-resolution (namespace) operator; the correct replacement here is a semicolon (;).
cudaMalloc
Correct. cudaMalloc is not the right function here, since cudaMalloc allocates memory on the GPU; malloc is used to allocate pageable CPU memory.
input_a
Correct. input_a was never declared, so a pointer cannot be assigned to it.
7.
Question 7: Which of the following functions is used to allocate pinned CPU memory?
cudaMallocHost
Correct. cudaMallocHost allocates page-locked (pinned) host memory, which the CUDA driver can transfer to and from the GPU more efficiently than pageable memory.
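A minimal pinned-allocation sketch, assuming the CUDA toolkit is installed and an Nvidia GPU is present (note that the matching free function is cudaFreeHost):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int *h_pinned = NULL;
    // Allocate 1024 ints of page-locked (pinned) host memory.
    cudaError_t err = cudaMallocHost((void **)&h_pinned, 1024 * sizeof(int));
    if (err != cudaSuccess) {
        printf("cudaMallocHost failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    h_pinned[0] = 42;        // usable like ordinary host memory
    cudaFreeHost(h_pinned);  // pinned memory is freed with cudaFreeHost
    return 0;
}
```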
8.
Question 8: The purpose of struct allocation is to give a more object-oriented access pattern for host memory.
True
Structs are used to allocate objects that have named fields but lack the member functions found in full C++ classes.
9.
Question 9: Given the following example device memory allocation:
cudaError_t cudaErr = cudaMalloc(&dev_a,N*sizeof(int));
Which of the following mappings of term to purpose is correct?
cudaErr - This is an instance of the data type used to hold error data
This is correct, as cudaError_t is the data type CUDA uses to report the status of API calls.
N*sizeof(int) - This computes the size in bytes of an int array of length N
This is correct; multiplying the element count by sizeof(int) is the standard way to compute the allocation size for an array of N ints.
&dev_a - This is an int array pointer
This is correct. The & passes the address of the dev_a pointer so that cudaMalloc can write the device address into it, and the N*sizeof(int) size indicates that dev_a will point not to a single primitive but to an array of ints.
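Putting the three terms together, a minimal allocation-with-error-check sketch (requires the CUDA toolkit and an Nvidia GPU to run):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int N = 1024;
    int *dev_a = NULL;
    // cudaMalloc writes the device address into dev_a and returns a status code.
    cudaError_t cudaErr = cudaMalloc((void **)&dev_a, N * sizeof(int));
    if (cudaErr != cudaSuccess) {
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(cudaErr));
        return 1;
    }
    cudaFree(dev_a);
    return 0;
}
```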
10.
Question 10: Which of the following is a correct example of copying data from CPU pageable/pinned memory to GPU memory, presuming that h_a is host memory and d_a is device memory?
cudaMemcpy(d_a, h_a, N*sizeof(int), cudaMemcpyHostToDevice);
This is the correct way of copying data from host to device memory: cudaMemcpy takes the destination first, then the source, then the number of bytes, and finally the direction flag.
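A minimal round-trip sketch of the host-to-device copy (and back), assuming the CUDA toolkit and an Nvidia GPU are available:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int N = 4;
    int h_a[N] = {1, 2, 3, 4}, h_b[N] = {0};
    int *d_a = NULL;
    cudaMalloc((void **)&d_a, N * sizeof(int));
    // Host-to-device: destination d_a, source h_a.
    cudaMemcpy(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice);
    // Device-to-host: copy the data back into h_b to verify the round trip.
    cudaMemcpy(h_b, d_a, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d %d %d %d\n", h_b[0], h_b[1], h_b[2], h_b[3]);
    cudaFree(d_a);
    return 0;
}
```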