Incomplete projec #3

Merged 33 commits on Feb 23, 2017

Commits
4842c05
Update README.md
Snigf12 Feb 10, 2017
9630124
Update README.md
Snigf12 Feb 22, 2017
6eefd1b
Update README.md
Snigf12 Feb 22, 2017
9a4472a
Add files via upload
Snigf12 Feb 22, 2017
2c8029b
Rename SerialSistemaFinal.py to WithKinect/SerialSistemaFinal.py
Snigf12 Feb 22, 2017
f9901a1
Rename buscar_pelotasVN_LaplaceLab.py to WithKinect/buscar_pelotasVN_…
Snigf12 Feb 22, 2017
9e18b5d
Create Readme.md
Snigf12 Feb 22, 2017
b7ed3d4
Delete Readme.md
Snigf12 Feb 22, 2017
1c1c6d6
Add files via upload
Snigf12 Feb 22, 2017
78f65cf
Update README.md
Snigf12 Feb 22, 2017
c7c2dac
Update README.md
Snigf12 Feb 22, 2017
3a309c8
Create README.md
Snigf12 Feb 22, 2017
8145cbd
Rename buscar_pelotasVN_LaplaceLab.py to buscar_pelotasVN_Lab.py
Snigf12 Feb 22, 2017
098b4d7
Update SerialSistemaFinal.py
Snigf12 Feb 22, 2017
10eb8f8
Rename SerialSistemaFinal.py to SistemaFinal.py
Snigf12 Feb 22, 2017
c0fd547
Update README.md
Snigf12 Feb 22, 2017
5286f93
Update README.md
Snigf12 Feb 22, 2017
c127ed5
Update README.md
Snigf12 Feb 22, 2017
1bbc823
Delete Output.PNG
Snigf12 Feb 22, 2017
c0de53f
Rename WithKinect/README.md to Project/README.md
Snigf12 Feb 22, 2017
b6a297e
Rename WithKinect/SistemaFinal.py to Project/SistemaFinal.py
Snigf12 Feb 22, 2017
9a84b52
Rename WithKinect/buscar_pelotasVN_Lab.py to WithKinect/Project/busca…
Snigf12 Feb 22, 2017
055b5b7
Rename WithKinect/Project/buscar_pelotasVN_Lab.py to Project/buscar_p…
Snigf12 Feb 22, 2017
9a72a4f
Add files via upload
Snigf12 Feb 22, 2017
98fe0b4
Add files via upload
Snigf12 Feb 22, 2017
3de1fd2
Rename DetectorVN_LaplaceLabPruebaEscritorio.py to Testing/DetectorVN…
Snigf12 Feb 22, 2017
4c56215
Rename DetectorVN_LaplaceLabPruebaEscritorio.py to DetectorVN_LabPrue…
Snigf12 Feb 22, 2017
7170c32
Create README.md
Snigf12 Feb 23, 2017
98e34f8
Add files via upload
Snigf12 Feb 23, 2017
848f88d
Update DetectorVN_LabPruebaEscritorio.py
Snigf12 Feb 23, 2017
f10f774
Update DetectorVN_LabPruebaEscritorio.py
Snigf12 Feb 23, 2017
f20ec5b
Add files via upload
Snigf12 Feb 23, 2017
3ed33aa
Update README.md
Snigf12 Feb 23, 2017
Binary file added Project/Output.PNG
6 changes: 6 additions & 0 deletions Project/README.md
@@ -0,0 +1,6 @@
The file "buscar_pelotasVN_Lab.py" contains the function for the artificial vision system.

This function returns [c1, c2, numx, numy], where c1 and c2 indicate the detected color (c1 -> Orange, c2 -> Green), and numx and numy are the coordinates measured from the top side of the Kinect sensor.

As shown in the image "Output.PNG", numx represents the x coordinate and numy represents the y coordinate.
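As a purely illustrative sketch of how a caller might interpret the [c1, c2, numx, numy] tuple described above (the helper name and the output formatting are assumptions, not part of the project):

```python
def describe_detection(c1, c2, numx, numy):
    """Turn a [c1, c2, numx, numy] result into a readable string.

    c1/c2 are the color flags (c1 -> Orange, c2 -> Green); numx/numy
    are the coordinates measured from the Kinect sensor.
    """
    if c1:
        color = "Orange"
    elif c2:
        color = "Green"
    else:
        return "No ball detected"
    return "%s ball at x=%.2f, y=%.2f" % (color, numx, numy)

print(describe_detection(1, 0, 0.25, 1.10))  # Orange ball at x=0.25, y=1.10
print(describe_detection(0, 0, 0, 0))        # No ball detected
```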
44 changes: 44 additions & 0 deletions Project/SistemaFinal.py
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
import RPi.GPIO
from numpy import array
from buscar_pelotasVN_Lab import *
import time

# Map the GPIO pins by board pin number
RPi.GPIO.setmode(RPi.GPIO.BOARD)

# Output pin configuration
# Serial output
RPi.GPIO.setup(36, RPi.GPIO.OUT)

# "Raspberry ready" signal
RPi.GPIO.setup(38, RPi.GPIO.OUT)

# Input pin configuration
# "Ready to receive data" signal from the Vex ARM Cortex
RPi.GPIO.setup(40, RPi.GPIO.IN)

try:
    while True:
        # Call the vision system:
        # c1 -> bool, True if the target is orange
        # c2 -> bool, True if the target is green
        # numx -> float, x distance from the sensor in cm (horizontal)
        # numy -> float, y distance from the sensor in cm (depth)
        c1, c2, numx, numy = buscar_pelotasVN()
        # Convert to digital values
        if numy > 0:
            print('numx', numx, 'numy', numy)
            # Scale the value to the 0-255 range, where 2 m
            # is the maximum value of ym in meters
            numy = int(255 * numy / 2)

        print('Orange', c1, 'Green', c2, 'numx [cm]', numx, 'numy [cm]', numy)

except KeyboardInterrupt:
    RPi.GPIO.cleanup()
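The main loop scales the depth reading so that 2 m maps to the full-scale byte value 255 before sending it over the GPIO serial link. A minimal standalone sketch of that conversion (the function name and the clamping of out-of-range readings are assumptions, not in the original script):

```python
def depth_to_byte(numy_m, max_m=2.0):
    """Scale a depth in meters to a 0-255 byte, as in the main loop,
    where max_m (2 m by default) maps to the full-scale value 255."""
    value = int(255 * numy_m / max_m)
    # Clamp so readings beyond max_m still fit in one byte
    return max(0, min(255, value))

print(depth_to_byte(1.0))  # 127
print(depth_to_byte(2.5))  # 255 (clamped)
```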
195 changes: 195 additions & 0 deletions Project/buscar_pelotasVN_Lab.py
@@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
# Import the required libraries
from freenect import *
from numpy import *
from cv2 import *
from time import *

def buscar_pelotasVN():  # Main function, called from the main program
                         # for serial transmission to the CORTEX

    # RGB frame acquisition from the Kinect
    def frame_RGB():
        array, _ = sync_get_video()
        array = cvtColor(array, COLOR_RGB2BGR)
        return array

    # Depth frame acquisition from the Kinect
    def frame_depth():
        array, _ = sync_get_depth()
        return array

    # Returns a binary image where green is white
    # and everything else is black
    def filtLAB_Verde(img):
        lab = cvtColor(img, COLOR_BGR2Lab)
        # Green range values used to build the mask
        #verde_bajo = array([80, 132, 0]) -> im1
        #verde_alto = array([244, 153, 110]) -> im1
        #verde_bajo = array([52, 141, 21]) -> im2
        #verde_alto = array([196, 156, 94]) -> im2
        verde_bajo = array([20, 76, 132])
        verde_alto = array([240, 121, 215])

        mascara = inRange(lab, verde_bajo, verde_alto)

        er = ones((7, 7), uint8)  # erosion kernel

        dil = array([[0,0,0,1,0,0,0],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [1,1,1,1,1,1,1],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [0,0,0,1,0,0,0]], uint8)  # dilation kernel

        mascara = erode(mascara, er, iterations=1)    # apply erosion
        mascara = dilate(mascara, dil, iterations=1)  # apply dilation
        return mascara

    # Returns a binary image where orange is white
    # and everything else is black
    def filtLAB_Naranja(img):
        lab = cvtColor(img, COLOR_BGR2Lab)
        # Orange range values used to build the mask
        #naranja_bajo = array([51, 158, 69]) -> im1
        #naranja_alto = array([193, 202, 112]) -> im1
        #naranja_bajo = array([44, 166, 71]) -> im2
        #naranja_alto = array([170, 205, 106]) -> im2
        naranja_bajo = array([20, 136, 152])
        naranja_alto = array([235, 192, 198])

        mascara = inRange(lab, naranja_bajo, naranja_alto)

        er = ones((7, 7), uint8)  # erosion kernel

        dil = array([[0,0,0,1,0,0,0],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [1,1,1,1,1,1,1],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [0,0,0,1,0,0,0]], uint8)  # dilation kernel

        mascara = erode(mascara, er, iterations=1)    # apply erosion
        mascara = dilate(mascara, dil, iterations=2)  # apply dilation
        return mascara

    # Values to return
    # resultado = [c1, c2, x1, x2, x3, x4, x5, x6, x7, x8, y1, y2, y3, y4, y5, y6, y7, y8]
    #  Color |  X coordinate (xm)  |  Y coordinate (ym)
    # Array with the information that is sent over serial

    # Main part
    init = time()  # start timing

    frame = frame_RGB()    # read the RGB frame
    depth = frame_depth()  # read the depth frame
    depth = resize(depth, (0, 0), fx=0.5, fy=0.5)

    mascaraV = resize(frame, (0, 0), fx=0.5, fy=0.5)
    mascaraN = mascaraV
    frame = mascaraV
    frame = medianBlur(frame, 3)

    color = time()
    mascaraV = filtLAB_Verde(frame)
    mascaraN = filtLAB_Naranja(frame)

    tc = time() - color  # color-filtering time

    # Find the circles present in the edge detection
    circuloV = HoughCircles(mascaraV, HOUGH_GRADIENT, 1, 40, param1=60,
                            param2=24, minRadius=0, maxRadius=0)

    circuloN = HoughCircles(mascaraN, HOUGH_GRADIENT, 1, 40, param1=60,
                            param2=24, minRadius=0, maxRadius=0)

    # The distances depV and depN are obtained using the information from
    # https://openkinect.org/wiki/Imaging_Information (August 18)
    # That regression was modified to reduce the error, using an
    # approximation of the form 1/(Bx+C), where x is the raw value
    # in bytes read from the sensor

    # RGB/Depth alignment offsets
    cteX = 9
    cteY = 9
    #circle(rgb, (80-cteX, 50+cteY), 40, (0, 0, 255), 5)

    centimg = round(frame.shape[1] / 2)   # horizontal image center (0 degrees)
    centVert = round(frame.shape[0] / 2)  # vertical image center

    # If at least one circle was found
    if circuloV is not None:
        circuloV = circuloV.astype("int")
        xV = circuloV[0, 0, 0]
        xVd = xV + cteX
        yV = circuloV[0, 0, 1]
        yVd = yV + cteY
        verde = True
        if xVd >= frame.shape[1]:
            xVd = 319
        if yVd >= frame.shape[0]:
            yVd = 239
        # The depth map is indexed as (y, x) -> (480x640)
        depV = 1 / (depth[yVd, xVd] * (-0.0028642) + 3.15221)
        depV = round(depV, 4)  # four decimal places
        if depV < 0:
            depV = 0
        #depV = ((4-0.8)/2048)*(depth[xVd, yVd]+1)+0.8  # own approximation
    else:
        verde = False

    if circuloN is not None:
        circuloN = circuloN.astype("int")
        xN = circuloN[0, 0, 0]
        xNd = xN + cteX
        yN = circuloN[0, 0, 1]
        yNd = yN + cteY
        naranja = True
        if xNd >= frame.shape[1]:
            xNd = 319
        if yNd >= frame.shape[0]:
            yNd = 239

        # The depth map is indexed as (y, x) -> (480x640)
        depN = 1 / (depth[yNd, xNd] * (-0.0028642) + 3.15221)
        depN = round(depN, 4)  # four decimal places
        if depN < 0:
            depN = 0
    else:
        naranja = False

    if naranja or (verde and naranja):
        c1, c2 = 1, 0
        bethaN = abs(centVert - yNd) * 0.17916  # 0.17916 degrees/px vertically (43/240)
        bethaN = (bethaN * pi) / 180
        depN = depN * cos(bethaN)  # project the distance onto the 0-degree vertical plane
        alphaN = (xNd - centimg) * 0.1781  # 0.1781 degrees/px horizontally (320x240)
        alphaN = (alphaN * pi) / 180  # in radians
        xm = depN * sin(alphaN)
        ym = depN * cos(alphaN)
    elif verde and (not naranja):
        c1, c2 = 0, 1
        bethaV = abs(centVert - yVd) * 0.17916  # 0.17916 degrees/px vertically (43/240)
        bethaV = (bethaV * pi) / 180
        depV = depV * cos(bethaV)  # project the distance onto the 0-degree vertical plane
        alphaV = (xVd - centimg) * 0.1781  # 0.1781 degrees/px horizontally
        alphaV = (alphaV * pi) / 180  # in radians
        xm = depV * sin(alphaV)
        ym = depV * cos(alphaV)
    else:
        c1, c2 = 0, 0
        xm, ym = 0, 0
    t = time() - init
##    imshow('VERDE', mascaraV)
##    waitKey(1)
##    imshow('NARANJA', mascaraN)
##    waitKey(1)
    print('FIN', t, 'COLOR', tc)
    print(c1, c2, xm, ym)
    return c1, c2, xm, ym
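The depth regression and the angle geometry used in buscar_pelotasVN can be sketched without a Kinect attached. The function names below are hypothetical; the constants are the ones that appear in the script (the 1/(Bx+C) fit and 0.1781 degrees per pixel at 320x240):

```python
from math import sin, cos, pi

def raw_depth_to_meters(raw):
    # The 1/(B*x + C) regression from the script; raw is the sensor value in bytes
    return 1 / (raw * (-0.0028642) + 3.15221)

def pixel_to_xy(x_px, depth_m, centimg=160, deg_per_px=0.1781):
    # Horizontal angle of the pixel from the image center, then
    # decompose the measured distance into (xm, ym) coordinates
    alpha = (x_px - centimg) * deg_per_px * pi / 180
    return depth_m * sin(alpha), depth_m * cos(alpha)

# A target at the image center sits on the optical axis: xm = 0, ym = depth
xm, ym = pixel_to_xy(160, 1.5)
print(xm, ym)  # 0.0 1.5
```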
18 changes: 13 additions & 5 deletions README.md
@@ -2,13 +2,21 @@
Spheres Recognition Color-Depth

Hi there! This is my first project,
First version

The first version will be uploaded in November of 2016
Will be a program developed with python, Kinect and Raspberry Pi 1 B+
to recognize spheres and their position on coordinates x, y, in cm respect the position of the Kinect Sensor.
This is an artificial vision system for robotics applications, developed with Python and OpenCV, acquiring the images with a Kinect sensor and processing them with a Raspberry Pi 3 Model B.

Recognize only two colors (orange and green).
Will be used xBox360 Kinect Sensor - 1414
The system recognizes spheres and their position in x, y coordinates, in cm, with respect to the position of the Kinect sensor.

It recognizes only two colors (orange and green).
The sensor used is the Xbox 360 Kinect Sensor, model 1414.

1. Install Raspbian on your Raspberry Pi - https://www.raspberrypi.org/downloads/
2. Install the OpenCV library for Python - http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/
3. Install the Numpy library for python - pip install numpy
4. Install libfreenect to be able to use Kinect sensor - Nice tutorial -> https://naman5.wordpress.com/2014/06/24/experimenting-with-kinect-using-opencv-python-and-open-kinect-libfreenect/ AND For more information about the OpenKinect community -> https://openkinect.org/wiki/Main_Page

To find spheres, this system uses the HoughCircles method. If the green and orange colors are not filtered well, you can adjust the desired color ranges; the Lab color space is used.
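The Lab filtering step is essentially a per-channel range test. A minimal NumPy-only sketch of that masking (the helper name is an assumption, and the sample pixel values are made up for illustration; the thresholds are the green range from the project scripts):

```python
import numpy as np

def in_range_mask(lab_img, lower, upper):
    # White (255) where every channel lies inside [lower, upper], else black,
    # mirroring what cv2.inRange does for the Lab thresholds in the scripts
    inside = np.all((lab_img >= lower) & (lab_img <= upper), axis=-1)
    return inside.astype(np.uint8) * 255

# Two fake Lab pixels: one inside the green range used in the project, one outside
pixels = np.array([[[100, 100, 180], [10, 10, 10]]], dtype=np.uint8)
lower = np.array([20, 76, 132])
upper = np.array([240, 121, 215])
print(in_range_mask(pixels, lower, upper))  # first pixel -> 255, second -> 0
```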

Thanks,
Snigf12