
[GPU programming] CPU and GPU Global Memory Quiz

Question 1

The first CUDA-compatible Nvidia GPU devices had 256 KB of global memory and now have 2 TB of global memory.

False

The first CUDA cards had a minimum of 256 MB of global memory, and high-end cards now have around 4 GB.

 

Question 2

Which of the following are Nvidia GPU architectures?

Tesla

This architecture was mostly used from 2008 to 2010.

Ampere

This architecture was introduced in 2020.

 

 

Question 3

The nvidia-smi CLI tool allows which of the following options and their associated uses?


-i This allows you to query for a specific Nvidia device based on an assigned id

Correct. An alternative is the --id long form.

-x Outputs in XML format

This is correct; if you would like to process the output of nvidia-smi as XML, this is the option to use.

-L This lists all Nvidia GPUs

Correct. You can also use --list-gpus for the same effect; a programmatic alternative using the CUDA runtime API is sketched below.
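
For reference, here is a minimal sketch of listing GPUs from code with the CUDA runtime API (cudaGetDeviceCount and cudaGetDeviceProperties), which reports roughly the same information as nvidia-smi -L; the output formatting below is just illustrative.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);   // how many CUDA devices are visible
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);          // query device i (compare nvidia-smi -i <id>)
        printf("GPU %d: %s, %.1f GiB global memory\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}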

 

Question 4

The lspci command was developed by Nvidia and is installed as part of CUDA expressly for the listing of GPU devices.

False

This tool lists all connected PCI devices on any Linux machine on which it is installed.
 
Question 5

Which of the following is an example of a C++ primitive data type?

int

This is one of the canonical primitives.

 

Question 6

Which highlighted (in bold text) parts of a pageable memory allocation are not correct?

char *words:: input_a = (char *)cudaMalloc(1024 * sizeof(char));

::

Correct. The :: is used as part of C++'s namespacing convention; the correct replacement here is the ; (semicolon).

cudaMalloc

Correct. cudaMalloc is not the right function here: cudaMalloc allocates memory on the GPU, while malloc allocates pageable CPU memory.

input_a

Correct. input_a was never declared, so it cannot be assigned a pointer; a corrected version of the line is sketched below.
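
Reading those corrections together, one possible fixed version of the allocation looks like the sketch below (the variable name words is taken from the quiz snippet; the error handling is just illustrative).

#include <cstdlib>

int main() {
    char *words;                                   // declare the pointer first (no ::)
    words = (char *)malloc(1024 * sizeof(char));   // malloc, not cudaMalloc, for pageable CPU memory
    if (words == NULL) return 1;                   // malloc can fail
    free(words);                                   // release the pageable allocation
    return 0;
}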

Question 7

Which of the following functions is used to allocate pinned CPU memory?

cudaMallocHost

This is correct: this function's purpose is to allocate pinned (page-locked) CPU memory for use with CUDA. A minimal sketch follows.
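
A minimal sketch of a pinned allocation, assuming a 1024-byte buffer and an illustrative variable name:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    char *h_pinned = NULL;
    // cudaMallocHost allocates page-locked (pinned) CPU memory
    cudaError_t err = cudaMallocHost((void **)&h_pinned, 1024 * sizeof(char));
    if (err != cudaSuccess) {
        printf("cudaMallocHost failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // pinned memory can be transferred to the GPU faster and asynchronously
    cudaFreeHost(h_pinned);   // pinned memory is freed with cudaFreeHost, not free
    return 0;
}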

Question 8

The purpose of struct allocation is to give a more object-oriented access pattern for host memory.

True

Structs are used to allocate objects that have named properties but do not have the member functions found in full C++ classes. A minimal sketch follows.
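
A minimal sketch of struct allocation in host memory; the struct name and fields below are purely illustrative:

#include <cstdlib>

struct Point3 {          // named properties only, no member functions
    float x, y, z;
};

int main() {
    // pageable host allocation of an array of 100 structs
    struct Point3 *points = (struct Point3 *)malloc(100 * sizeof(struct Point3));
    if (points == NULL) return 1;
    points[0].x = 1.0f;  // fields are accessed by name
    free(points);
    return 0;
}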

 

Question 9

Given the following example device memory allocation:

cudaError_t cudaErr = cudaMalloc(&dev_a, N*sizeof(int));

Which of the following mappings of term to purpose is correct?

cudaErr - This is an instance of the data type used to hold error data

This is correct, as cudaError_t is a data type for holding error data.

N*sizeof(int) - This computes the size of an int array of size N

This is correct: it is common practice to compute the size in bytes of an array of N integers this way.

&dev_a - This is an int array pointer

This is correct: the & indicates that the pointer is passed by reference so that cudaMalloc can set it, and N*sizeof(int) indicates that the pointer refers not to a single primitive but to an array of ints. A minimal sketch follows.
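
A minimal sketch that puts the quiz line in context, assuming N is defined as 1024 and adding the surrounding declarations:

#include <cstdio>
#include <cuda_runtime.h>

#define N 1024

int main() {
    int *dev_a = NULL;   // device pointer; &dev_a is passed so cudaMalloc can fill it in
    cudaError_t cudaErr = cudaMalloc(&dev_a, N * sizeof(int));  // room for N ints on the GPU
    if (cudaErr != cudaSuccess) {
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(cudaErr));
        return 1;
    }
    cudaFree(dev_a);     // release the device allocation
    return 0;
}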

 

Question 10

Which of the following is a correct example of a copy of data from CPU pageable/pinned memory to GPU memory, presuming that h_a is host memory and d_a is device memory?

cudaMemcpy(d_a, h_a, N*sizeof(int), cudaMemcpyHostToDevice);

This is the correct way of copying data from host to device memory: the destination (d_a) comes first, then the source (h_a), then the byte count and the direction. A complete sketch follows.
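
A minimal end-to-end sketch, assuming an array of N ints and reusing the h_a / d_a names from the question:

#include <cstdlib>
#include <cuda_runtime.h>

#define N 1024

int main() {
    int *h_a = (int *)malloc(N * sizeof(int));       // pageable host memory
    int *d_a = NULL;
    cudaMalloc(&d_a, N * sizeof(int));               // device global memory
    for (int i = 0; i < N; ++i) h_a[i] = i;          // fill the host array

    // cudaMemcpy(destination, source, byte count, direction)
    cudaMemcpy(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice);  // host -> device
    // ... launch kernels that use d_a here ...
    cudaMemcpy(h_a, d_a, N * sizeof(int), cudaMemcpyDeviceToHost);  // device -> host

    cudaFree(d_a);
    free(h_a);
    return 0;
}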