
I don’t know how OpenMP works internally, but I presume calling a function with `restrict`-qualified pointer arguments inside a parallel `for` loop doesn’t work if the pointed-to objects could be shared by multiple threads? Take the following example of serial code meant to perform a weighted sum down each matrix column:

```
const int n = 10;
const double x[n][n] = {...}; // matrix, containing some numbers
const double w[n] = {...};    // weights, containing some numbers

// weighted sum of n elements spaced `stride` apart
// (stride 1 for a contiguous vector, stride n for a column of x)
double mywsum(const double *restrict px, const double *restrict pw,
              const int stride, const int n) {
  double tmp = 0.0;
  for(int i = 0; i < n; ++i) tmp += px[i * stride] * pw[i];
  return tmp;
}

double res[n];            // results vector
const double *pw = &w[0]; // pointer to w

// loop doing the column-wise weighted sum; column j starts at &x[0][j]
// and its elements are n doubles apart in the row-major layout
for(int j = 0; j < n; ++j) {
  res[j] = mywsum(&x[0][j], pw, n, n);
}
```

Now I want to parallelize this loop using OpenMP, e.g.:

```
#pragma omp parallel for
for(int j = 0; j < n; ++j) {
  res[j] = mywsum(&x[0][j], pw, n, n);
}
```

I believe the `*restrict px` could still be valid, since the elements it points to are only accessed by one thread at a time, but the `*restrict pw` seems problematic: the elements of `w` are accessed concurrently by multiple threads. So should the `restrict` qualifier be removed here?
