I have a custom object detection model, and I'm using Interpreter.runForMultipleInputsOutputs to detect objects on a bitmap.
The problem is that at some point I started receiving all zeroes in the output when using the GPU delegate, while the CPU gives correct results.
Interestingly, the GPU works fine on my test device; the problem only occurs on a remote device, which is my target device.
What could be the problem?
Here is my code (slightly changed from the example):
var gpuDelegate: GpuDelegate? = if (device == Device.GPU) GpuDelegate() else null
val options = Interpreter.Options().apply {
    when (device) {
        is Device.CPU -> setNumThreads(device.numThreads)
        Device.GPU -> addDelegate(gpuDelegate)
    }
}
val interpreter = Interpreter(
    FileUtil.loadMappedFile(context, MODEL_FILENAME),
    options
)
...
val inputTensor = processInputImage(bitmap, inputWidth, inputHeight)
val locations = arrayOf(Array(10) { FloatArray(4) })
val labelIndices = arrayOf(FloatArray(10))
val scores = arrayOf(FloatArray(10))
val outputCount = FloatArray(1)
val outputBuffer = mapOf(
    0 to locations,
    1 to labelIndices,
    2 to scores,
    3 to outputCount
)
val detectionRegions = mutableListOf<WeightRegion>()
interpreter.runForMultipleInputsOutputs(arrayOf(inputTensor.buffer), outputBuffer)
gpuDelegate?.close()
interpreter.close()
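For context, one difference between the two devices could be GPU delegate support itself: the tensorflow-lite-gpu artifact ships a CompatibilityList that reports whether the current device's GPU is known to work with the delegate. Below is a minimal sketch of gating the delegate on that check; the fallback-to-CPU behavior and the buildOptions helper are my assumptions, not part of the original code.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate

// Sketch: attach the GPU delegate only when this device is on the
// TFLite GPU compatibility list; otherwise fall back to CPU threads.
fun buildOptions(numThreads: Int): Interpreter.Options {
    val options = Interpreter.Options()
    val compatList = CompatibilityList()
    if (compatList.isDelegateSupportedOnThisDevice) {
        // Use the delegate options TFLite recommends for this device.
        options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
    } else {
        options.setNumThreads(numThreads)
    }
    return options
}
```

An unsupported GPU would at least explain a device-specific failure, though on some devices the check can pass while a specific op still misbehaves on the GPU, so comparing CPU and GPU outputs on the target device remains the decisive test.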
build.gradle:
android {
    ...
    buildFeatures {
        mlModelBinding true
    }
}

dependencies {
    ...
    implementation 'org.tensorflow:tensorflow-lite:2.7.0'
    implementation 'org.tensorflow:tensorflow-lite-gpu:2.7.0'
    implementation 'org.tensorflow:tensorflow-lite-support:0.3.0'
    implementation 'org.tensorflow:tensorflow-lite-metadata:0.3.0'
}