Where to find logging for preprocessing Custom model #57

Open
fulankun1412 opened this issue Jun 3, 2023 · 2 comments
@fulankun1412
While trying to create a custom model using Ultralytics' YOLOv8, I got this message when using Postman to test my endpoint.

[screenshot: error response from the endpoint]

header:
[screenshot: request headers]

body payload:

{
"imgString": "base64encodedImage"
}
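For reference, a payload in this shape can be built with the Python standard library alone (a sketch; `build_payload` is a hypothetical helper, not part of the project):

```python
import base64
import json

def build_payload(image_bytes: bytes) -> str:
    """Encode raw image bytes as base64 and wrap them in the request body."""
    img_b64 = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"imgString": img_b64})

# Round-trip check with in-memory bytes instead of a real image file:
body = json.loads(build_payload(b"\xff\xd8\xff fake jpeg bytes"))
assert base64.b64decode(body["imgString"]) == b"\xff\xd8\xff fake jpeg bytes"
```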

The preprocess input would be like this:

def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        print(body)
        base64String = body.get("imgString")
        print(base64String)
        self._image = cv2.imdecode(np.frombuffer(base64.b64decode(base64String), np.uint8), cv2.IMREAD_COLOR)
        self._scalingH, self._scalingW = self._image.shape[0]/imgSize, self._image.shape[1]/imgSize
        data = cv2.resize(self._image, (imgSize, imgSize))
        return data
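Since bare `print` output inside preprocess can be hard to locate, one option is routing messages through Python's `logging` module so each failure mode is reported explicitly. A minimal sketch (assumption: the serving container forwards the process's stdout/stderr to `docker logs`; `decode_image_bytes` is a hypothetical helper, not part of clearml-serving):

```python
import base64
import binascii
import logging
import sys

# Send log records to stderr so they appear in the container output
# (assumption: `docker logs` captures the serving process's stderr).
logging.basicConfig(stream=sys.stderr, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logger = logging.getLogger("preprocess")

def decode_image_bytes(body: dict) -> bytes:
    """Decode the base64 payload, logging each failure mode explicitly."""
    base64_string = body.get("imgString")
    if base64_string is None:
        logger.error("request body is missing 'imgString'; keys=%s", list(body))
        raise ValueError("missing imgString")
    try:
        raw = base64.b64decode(base64_string, validate=True)
    except binascii.Error:
        logger.exception("imgString is not valid base64")
        raise
    logger.info("decoded %d bytes of image data", len(raw))
    return raw
```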

The process function:

def process(
            self,
            data: Any,
            state: dict,
            collect_custom_statistics_fn: Optional[Callable[[dict], None]],
    ) -> Any:  # noqa
        

        # this is where we do the heavy lifting, i.e. run our model.
        results = self._model.predict(data, imgsz = imgSize,
                                      conf = configModel["model-config"]["conf"], iou = configModel["model-config"]["iou"],
                                      save = configModel["model-config"]["save-mode"], save_conf = configModel["model-config"]["save-mode"],
                                      save_crop = configModel["model-config"]["save-mode"], save_txt = configModel["model-config"]["save-mode"],
                                      device = configModel["model-config"]["device-mode"])
        return results

And the postprocess looks like this:

def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        results = data
        classes = results[0].names

        imgDict = {}
        finalDict = {}
        dictDataEntity = {}
        for boxes in results[0].boxes:
            for box in boxes:
                labelNo = int(box.cls)

                x1 = int(box.xyxy[0][0]*self._scalingW)
                y1 = int(box.xyxy[0][1]*self._scalingH)
                x2 = int(box.xyxy[0][2]*self._scalingW)
                y2 = int(box.xyxy[0][3]*self._scalingH)

                tempCrop = self._image[y1:y2, x1:x2]

                imgDict.update({labelNo:tempCrop})

        orderedDict = OrderedDict(sorted(imgDict.items()))
        for key, value in orderedDict.items():
            for classKey, classValue in classes.items(): 
                if key == classKey:
                    finalDict[classValue] = value

        img_v_resize = hconcat_resize(finalDict.values(), imgDelimiter) # concatenate the cropped images with the delimiter between them
        gray_imgResize = get_grayscale(img_v_resize) # call the grayscaling function
        success, encoded_image = cv2.imencode('.jpg', gray_imgResize) # save the image in memory
        BytesImage = encoded_image.tobytes()
        a = cv2.resize(img_v_resize, (960, 540))
        #cv2.imwrite("test.jpg", gray_imgResize)

        text_response = get_text_response_from_path(BytesImage)

        #========== POST PROCESSING ================#
        dataEntity = text_response[0].description.strip() # show only the description info from gvision
        a = [i.split("\n") for i in dataEntity.split('PEMISAH') if i]
        

        value = []
        for i in a:
            c = [d for d in i if d]
            listToStr = ' '.join([str(elem) for elem in c])
            stripListToStr = listToStr.strip()
            value.append(stripListToStr)

        i = 0

        for entity in classes.values():
            dictDataEntity[entity] = value[i]
            i+=1
            if len(value) == i:
                break

        for label in classes.values():
            if label not in dictDataEntity.keys():
                dictDataEntity[label] = "-"

        return dict(predict=dictDataEntity)
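The delimiter-splitting logic at the end of this postprocess can also be isolated into pure functions, which makes it easy to unit-test outside the serving container. A sketch (`split_ocr_text` and `map_to_labels` are hypothetical helper names, not part of the original code):

```python
def split_ocr_text(raw_text: str, delimiter: str = "PEMISAH") -> list:
    """Split the OCR output on the delimiter token and collapse each
    segment into a single whitespace-normalized string."""
    values = []
    for segment in raw_text.split(delimiter):
        lines = [line for line in segment.split("\n") if line]
        joined = " ".join(lines).strip()
        if joined:
            values.append(joined)
    return values

def map_to_labels(values: list, labels: list, missing: str = "-") -> dict:
    """Pair extracted values with class labels in order; labels without a
    matching value are padded with the `missing` placeholder."""
    result = {label: missing for label in labels}
    for label, value in zip(labels, values):
        result[label] = value
    return result

raw = "name\nAlice\nPEMISAHqty\n3"
print(split_ocr_text(raw))  # → ['name Alice', 'qty 3']
print(map_to_labels(split_ocr_text(raw), ["name", "qty", "price"]))
# → {'name': 'name Alice', 'qty': 'qty 3', 'price': '-'}
```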

The problem is that I want to check the logs to find which part of the code is failing, but I can't find where the log for preprocessing is written. I'm pretty sure the problem is somewhere in my code, but I can't tell which line it is. Is there any way to write the log to the docker logs or the terminal? Thanks.

@jkhenning
Member

Hi @fulankun1412,

This basically means it failed to load your preprocess class. I'm assuming your def load() function failed?
This is an optional function; you do not have to implement it. Can you post the entire preprocess code?
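One way to check that hypothesis is to exercise the class outside the serving container and print the full traceback when `load()` fails, so the offending line is visible. A minimal sketch (`DemoPreprocess` and `try_load` are illustrative stand-ins, not part of clearml-serving; in practice you would import the real Preprocess class from your preprocess.py):

```python
import traceback

class DemoPreprocess:
    """Stand-in for the real Preprocess class; simulates a failing load()."""
    def load(self, local_file_name):
        # e.g. self._model = YOLO(path_to_model) in the real code
        raise RuntimeError("simulated load failure")

def try_load(preprocess_cls, model_path: str) -> bool:
    """Instantiate the class and call load(); print the traceback on
    failure so the offending line number is visible."""
    try:
        instance = preprocess_cls()
        instance.load(model_path)
        return True
    except Exception:
        traceback.print_exc()
        return False

print(try_load(DemoPreprocess, "model.pt"))  # → False
```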

@fulankun1412
Author

Yes, certainly, though it's a rather long piece of code, I presume.

from typing import Any, Callable, Optional

import base64
from collections import OrderedDict

import cv2
import numpy as np
from imread_from_url import imread_from_url
from ultralytics import YOLO
from google.cloud import vision
from google.cloud.vision_v1 import types

from clearml import Task, InputModel

configModel = {
  "clearml-training-project-config": {
    "project-name": "Take Home Test Model-eFishery",
    "task-name": "OCR Detection and Text Extraction",
    "id": "*************",
    "task-type": "inference"
  },
  "clearml-serving-project-config": {
    "project-name": "efishery Take Home Test Serving DevOps",
    "task-name": "OCR Detection and Text Extraction Serving",
    "id": True,
    "task-type": "inference"
  },
  "model-config": {
    "YOLO-model": "OCR-eFishery",
    "published": True,
    "tags": [],
    "delimiter": "https://github.com/fulankun1412/OCR-YoloV8-Lanang-efishery/blob/main/delimiter/delimiter6.png?raw=true",
    "vision-key": "development-356407-6002a5991254.json",
    "image-size": 640,
    "device-mode": "cpu",
    "save-mode": False,
    "conf": 0.25,
    "iou": 0.25
  }
}
gcv_api_key_path = {
  "type": "service_account",
  "project_id": "*****************************",
  "private_key_id": "**********************************",
  "private_key": "*******************************",
  "client_email": "**************************",
  "client_id": "************************************",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "***********************************"
}


inputModel = InputModel(project=configModel["clearml-training-project-config"]["project-name"], name=configModel["model-config"]["YOLO-model"],
                        only_published=configModel["model-config"]["published"], tags=configModel["model-config"]["tags"])

pathToModel = inputModel.get_local_copy()
imgDelimiter = imread_from_url(configModel["model-config"]["delimiter"])

imgSize = configModel["model-config"]["image-size"]

def hconcat_resize(img_list, img_delimiter, interpolation=cv2.INTER_CUBIC):
    w_total = 10
    # take the maximum height across the cropped images
    h_max = max(img.shape[0] for img in img_list)

    h_max2 = max(h_max, img_delimiter.shape[0])

    
    for img in img_list:
        w_total += img.shape[1] + 5
        w_total += img_delimiter.shape[1] + 5
    
    img_backgroud = np.zeros((h_max2, w_total, 3), dtype=np.uint8)  # create base background image with the max height and total width of all images in img_list
    img_backgroud[:, :] = (255, 255, 255)  # colour of the background

    current_x = 0
    for img in img_list:

        # add an image to the final array and increment the y coordinate
        img_backgroud[:img.shape[0],current_x:img.shape[1]+current_x,:] = img
        current_x = current_x + img.shape[1] + 5

        # add a delimiter image to each cropped image
        img_backgroud[:img_delimiter.shape[0],current_x:img_delimiter.shape[1]+current_x,:] = img_delimiter
        current_x = current_x + img_delimiter.shape[1] + 5

    return img_backgroud

def get_grayscale(image):
    return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

#====================== GOOGLE VISION ======================
client = vision.ImageAnnotatorClient.from_service_account_info(gcv_api_key_path)  # credentials are a dict here, so use from_service_account_info rather than from_service_account_file

def get_text_response_from_path(BytesImage):

    output = None
    try:
        image = types.Image(content=BytesImage)
    except ValueError:
        output = "Cannot Read Input File"
        return output

    text_response = client.text_detection(image=image, image_context={"language_hints": ["id"]})
    text = text_response.text_annotations

    return text
#====================== END OF GOOGLE VISION ======================

# Notice Preprocess class Must be named "Preprocess"
class Preprocess(object):
    """
    Notice the execution flows is synchronous as follows:

    1. RestAPI(...) -> body: dict
    2. preprocess(body: dict, ...) -> data: Any
    3. process(data: Any, ...) -> data: Any
    4. postprocess(data: Any, ...) -> result: dict
    5. RestAPI(result: dict) -> returned request
    """
    def __init__(self):
        # set internal state, this will be called only once. (i.e. not per request)
        self._model = None

    def load(self, local_file_name: str) -> Optional[Any]:  # noqa
        # Load Custom Ultralytics YOLOv8
        self._model = YOLO(pathToModel)

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        print(body)
        base64String = body.get("imgString")
        print(base64String)
        self._image = cv2.imdecode(np.frombuffer(base64.b64decode(base64String), np.uint8), cv2.IMREAD_COLOR)
        self._scalingH, self._scalingW = self._image.shape[0]/imgSize, self._image.shape[1]/imgSize
        data = cv2.resize(self._image, (imgSize, imgSize))
        return data
    
    def process(
            self,
            data: Any,
            state: dict,
            collect_custom_statistics_fn: Optional[Callable[[dict], None]],
    ) -> Any:  # noqa
        results = self._model.predict(data, imgsz = imgSize,
                                      conf = configModel["model-config"]["conf"], iou = configModel["model-config"]["iou"],
                                      save = configModel["model-config"]["save-mode"], save_conf = configModel["model-config"]["save-mode"],
                                      save_crop = configModel["model-config"]["save-mode"], save_txt = configModel["model-config"]["save-mode"],
                                      device = configModel["model-config"]["device-mode"])
        return results

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        results = data
        classes = results[0].names

        imgDict = {}
        finalDict = {}
        dictDataEntity = {}
        for boxes in results[0].boxes:
            for box in boxes:
                labelNo = int(box.cls)

                x1 = int(box.xyxy[0][0]*self._scalingW)
                y1 = int(box.xyxy[0][1]*self._scalingH)
                x2 = int(box.xyxy[0][2]*self._scalingW)
                y2 = int(box.xyxy[0][3]*self._scalingH)

                tempCrop = self._image[y1:y2, x1:x2]

                imgDict.update({labelNo:tempCrop})

        orderedDict = OrderedDict(sorted(imgDict.items()))
        for key, value in orderedDict.items():
            for classKey, classValue in classes.items(): 
                if key == classKey:
                    finalDict[classValue] = value

        img_v_resize = hconcat_resize(finalDict.values(), imgDelimiter) # concatenate the cropped images with the delimiter between them
        gray_imgResize = get_grayscale(img_v_resize) # call the grayscaling function
        success, encoded_image = cv2.imencode('.jpg', gray_imgResize) # save the image in memory
        BytesImage = encoded_image.tobytes()
        a = cv2.resize(img_v_resize, (960, 540))
        #cv2.imwrite("test.jpg", gray_imgResize)

        text_response = get_text_response_from_path(BytesImage)

        #========== POST PROCESSING ================#
        dataEntity = text_response[0].description.strip() # show only the description info from gvision
        a = [i.split("\n") for i in dataEntity.split('PEMISAH') if i]
        

        value = []
        for i in a:
            c = [d for d in i if d]
            listToStr = ' '.join([str(elem) for elem in c])
            stripListToStr = listToStr.strip()
            value.append(stripListToStr)

        i = 0

        for entity in classes.values():
            dictDataEntity[entity] = value[i]
            i+=1
            if len(value) == i:
                break

        for label in classes.values():
            if label not in dictDataEntity.keys():
                dictDataEntity[label] = "-"

        return dict(predict=dictDataEntity)

Maybe there's a problem with my code, but I followed the tutorial for the custom model closely, though I didn't use the joblib module to load the model.
