A group project for the Microsoft Fabric and AI Learning Hackathon. See our WebPage for more details.
- User Input Parser (`userInputParser.ipynb`): get the taskID
- GPT4o Talker (`GPT4otalker.ipynb`): get keywords from the user input
- URL Fetcher (`urlFetch.ipynb`): get image URLs by the keywords
- Image Fetcher (`imageFetcher.ipynb`): download the images and save them to the lakehouse
- Object Detector (`GroundingDINO_with_Segment_Anything.ipynb`): get the object detection results
  - Detect objects in images based on text prompts
  - Generate segmentation masks for the detected objects
  - Process and crop images based on the detections
  - Use CLIP to get the image descriptions
  - Use SAM to generate the segmentation masks
- Resizer (`Resize.ipynb`): resize the images and update the `pic` table
- Packer (`Packing.ipynb`): pack the images into a single image and update the `pic` table
- Email Sender: get the picture package and send it to the user with a Fabric pipeline
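The stages above hand their output to the next stage in sequence. A minimal sketch of that data flow, with hypothetical function names standing in for the notebooks (the stubs below are illustrative, not the actual notebook logic):

```python
# Hypothetical stand-ins for the first pipeline stages; the real
# notebooks call GPT-4o and an image-search API instead of these stubs.

def get_keywords(user_input: str) -> list[str]:
    # GPT4otalker.ipynb: extract search keywords from the user input (stubbed).
    return [w for w in user_input.lower().split() if len(w) > 3]

def fetch_urls(keywords: list[str]) -> list[str]:
    # urlFetch.ipynb: look up image URLs for each keyword (stubbed).
    return [f"https://example.com/{kw}.jpg" for kw in keywords]

def run_pipeline(user_input: str) -> list[str]:
    # Parser -> Talker -> URL Fetcher; the later stages (download,
    # detect, resize, pack, email) follow the same hand-off pattern.
    return fetch_urls(get_keywords(user_input))

urls = run_pipeline("pictures of golden retriever puppies")
```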
- Metadata (`MetaTableNotebook.ipynb`): define the metadata of the images
  - `pic`: store the image URLs, descriptions, and other information
  - `task`: store the task information
- Database (`sqlite_db.py`): define the database of the `task` and `pic` tables
  - `task`: store the task information
  - `pic`: store the image URLs, descriptions, and other information
  - to inspect the database, run `sqlite3 tasks.db` and then `SELECT * FROM tasks;`
  - to exit the database viewer, run `.exit`
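For reference, here is a minimal sketch of the two tables described above, using Python's built-in `sqlite3` module. The column names are assumptions for illustration; the actual schema in `sqlite_db.py` may differ.

```python
import sqlite3

# In-memory database for the sketch; the project uses "tasks.db" on disk.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE IF NOT EXISTS task (
    task_id  INTEGER PRIMARY KEY,
    status   TEXT,
    keywords TEXT
);
CREATE TABLE IF NOT EXISTS pic (
    pic_id      INTEGER PRIMARY KEY,
    task_id     INTEGER REFERENCES task(task_id),
    url         TEXT,
    description TEXT
);
""")

# Insert a sample task and read it back, mirroring the
# `SELECT * FROM tasks;` check mentioned above.
conn.execute("INSERT INTO task (status, keywords) VALUES (?, ?)",
             ("pending", "golden retriever"))
conn.commit()
rows = conn.execute("SELECT * FROM task").fetchall()
```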
- in `main.py`, we initialize processes for each kind of task.
- for each kind of task, we use a `Queue` to store the tasks to be processed.
- each process synchronously gets tasks from one queue.
- each process asynchronously processes the task and puts the result into another queue.
- the database is updated atomically by the task processor. (TODO: validate the atomicity)