<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-148984682-2"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-148984682-2');
</script>
<title>Yihua Huang</title>
<meta name="author" content="Yihua Huang">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/seal_icon.png">
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;">
<tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:25%;max-width:25%">
<a href="yihua.jpg"><img style="width:100%;max-width:100%" alt="profile photo" src="yihua.jpg" class="hoverZoomLink"></a>
</td>
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Yihua Huang</name>
</p>
<p style="text-align:center">
<a href="mailto:huangyihua16@mails.ucas.ac.cn">Email</a>  / 
<a href="https://drive.google.com/file/d/1N4lD1FHbiebmtoVjKmKbcNF6hQXWLXw4/view?usp=sharing">CV</a>  / 
<a href="https://www.linkedin.com/in/yihua-huang-0b2225245/">LinkedIn</a>  / 
<a href="https://github.com/yihua7/">Github</a>  / 
<a href="https://scholar.google.com/citations?hl=en&user=zLil53UAAAAJ">Scholar</a>
</p>
<p>
I am a second-year PhD student at the <a href="https://xjqi.github.io/cvmi.html">CVMI Lab</a>, supervised by <a href="https://xjqi.github.io/">Xiaojuan Qi</a>. My research focuses on 3D/4D reconstruction, interaction, simulation, and editing. Before this, I completed my master's degree at the <a href="http://english.ict.cas.cn/">Institute of Computing Technology</a>, part of the <a href="https://english.cas.cn/">Chinese Academy of Sciences</a>, under the supervision of Prof. <a href="http://geometrylearning.com/">Lin Gao</a>, whose guidance and support during my studies I deeply appreciate. I have also had the privilege of collaborating closely with Dr. <a href="https://yanpei.me/">Yan-Pei Cao</a> and Prof. <a href="https://users.cs.cf.ac.uk/Yukun.Lai/">Yu-Kun Lai</a>, both of whom contributed significantly to my academic growth. Before my master's program, I earned my bachelor's degree from the <a href="https://english.ucas.ac.cn/">University of Chinese Academy of Sciences</a>, where I was mentored by Prof. <a href="http://vipl.ict.ac.cn/people/xlchen/">Xilin Chen</a>, who introduced me to my research field and taught me the foundational principles of conducting research, for which I am sincerely grateful.
<!-- I'm very interested in 3D reconstruction from images, 3D shape analysis, and other 3D vision problems. My research interests also include robotics, slam and federated learning. -->
</p>
<p style="color:magenta">
<!-- <em>I will join <a href="https://xjqi.github.io/">Xiaojuan Qi</a> 's team as a Ph.D student , engaged in the research of 3D reconstruction and lifelong learning. </em> -->
</p>
</td>
</tr>
</tbody>
</table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;">
<tbody>
<tr>
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
<heading>Research</heading>
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[11]  <papertitle>Deformable Radial Kernel Splatting</papertitle>
<br>
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong> </a>,
<a href="https://scholar.google.com/citations?hl=en&user=smnEog0AAAAJ">MingXian Lin</a>,
<a href="https://sunyangtian.github.io/">Yang-Tian Sun</a>,
<a href="https://github.com/ingra14m">Ziyi Yang</a>,
<a href="https://github.com/shawLyu">Xiaoyang Lyu</a><sup></sup>,
<a href="https://yanpei.me/">Yan-Pei Cao</a><sup>#</sup>,
<a href="https://xjqi.github.io/">Xiaojuan Qi</a><sup>#</sup>
<br>
<em>arXiv</em>, 2024  
<br>
<a href="https://arxiv.org/pdf/2412.11752">paper</a> /
<a href="https://yihua7.github.io/DRK-web/">project page</a> /
<a href="https://github.com/yihua7/Deformable-Radial-Kernel-Splatting">code</a>
<br>
We introduce the Deformable Radial Kernel (DRK), which extends Gaussian splatting into a more general and flexible framework. Through learnable radial bases with adjustable angles and scales, DRK efficiently models diverse shape primitives while enabling precise control over edge sharpness and boundary curvature.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[10]  <papertitle>SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes</papertitle>
<br>
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong> </a><sup>*</sup>,
<a href="https://sunyangtian.github.io/">Yang-Tian Sun</a><sup>*</sup>,
<a href="https://github.com/ingra14m">Ziyi Yang</a><sup>*</sup>,
<a href="https://github.com/shawLyu">Xiaoyang Lyu</a><sup></sup>,
<a href="https://yanpei.me/">Yan-Pei Cao</a><sup>#</sup>,
<a href="https://xjqi.github.io/">Xiaojuan Qi</a><sup>#</sup>
<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, 2024  
<br>
<a href="https://arxiv.org/abs/2312.14937">paper</a> /
<a href="https://yihua7.github.io/SC-GS-web/">project page</a> /
<a href="https://github.com/yihua7/SC-GS">code</a>
<br>
We introduce sparse-controlled Gaussian splatting to synthesize dynamic novel views. With the learned node graph of sparse control points, real-time editing can be achieved via ARAP deformation driven by interactive user dragging.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[9]  <papertitle>Splatter a Video: Video Gaussian Representation for Versatile Processing</papertitle>
<br>
<a href="https://sunyangtian.github.io/">Yang-Tian Sun</a><sup>*</sup>,
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong> </a><sup>*</sup>,
<a>Lin Ma</a><sup></sup>,
<a href="https://github.com/shawLyu">Xiaoyang Lyu</a><sup></sup>,
<a href="https://yanpei.me/">Yan-Pei Cao</a>,
<a href="https://xjqi.github.io/">Xiaojuan Qi</a><sup>#</sup>
<br>
<em>Conference on Neural Information Processing Systems (NeurIPS)</em>, 2024  
<br>
<a href="https://arxiv.org/abs/2406.13870">paper</a> /
<a href="https://sunyangtian.github.io/spatter_a_video_web/">project page</a> /
<a href="https://github.com/SunYangtian/Splatter_A_Video">code</a>
<br>
We introduce a novel explicit 3D representation—video Gaussian representation—that embeds a video into 3D Gaussians, enabling tracking, consistent video depth and feature refinement, motion and appearance editing, and stereoscopic video generation.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[8]  <papertitle>Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting</papertitle>
<br>
<a href="https://github.com/ingra14m">Ziyi Yang</a>,
<a>Xinyu Gao</a>,
<a href="https://sunyangtian.github.io/">Yang-Tian Sun</a>,
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong> </a>,
<a href="https://github.com/shawLyu">Xiaoyang Lyu</a><sup></sup>,
<a>Wen Zhou</a>,
<a>Shaohui Jiao</a>,
<a href="https://xjqi.github.io/">Xiaojuan Qi</a><sup>#</sup>,
<a>Xiaogang Jin</a><sup>#</sup>
<br>
<em>Conference on Neural Information Processing Systems (NeurIPS)</em>, 2024  
<br>
<a href="https://ingra14m.github.io/Spec-Gaussian-website/file/Spec-Gaussian-nips24.pdf">paper</a> /
<a href="https://ingra14m.github.io/Spec-Gaussian-website/">project page</a> /
<a href="https://github.com/ingra14m/Spec-Gaussian">code</a>
<br>
We introduce Spec-Gaussian, an approach that utilizes an anisotropic spherical Gaussian (ASG) appearance field instead of SH for modeling the view-dependent appearance of each 3D Gaussian.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[7]  <papertitle>3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting</papertitle>
<br>
<a href="https://github.com/shawLyu">Xiaoyang Lyu</a><sup></sup>,
<a href="https://sunyangtian.github.io/">Yang-Tian Sun</a>,
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong> </a>,
<a>Xiuzhe Wu</a>,
<a href="https://github.com/ingra14m">Ziyi Yang</a>,
<a>Yilun Chen</a>,
<a>Jiangmiao Pang</a>,
<a href="https://xjqi.github.io/">Xiaojuan Qi</a><sup>#</sup>
<br>
<em>ACM SIGGRAPH 2024 Conference Proceedings (SIGGRAPH)</em>, 2024  
<br>
<a href="https://arxiv.org/abs/2404.00409">paper</a>
<br>
We introduce a differentiable SDF-to-opacity transformation function that converts SDF values into corresponding Gaussians' opacities. This function connects the SDF and 3D Gaussians, allowing for unified optimization and enforcing surface constraints on the 3D Gaussians.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[6]  <papertitle>NeRF-Texture: Synthesizing Neural Radiance Field Textures</papertitle>
<br>
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong></a>,
<a href="https://yanpei.me/">Yan-Pei Cao</a>,
<a href="https://users.cs.cf.ac.uk/Yukun.Lai/">Yu-Kun Lai</a>,
<a>Ying Shan</a>,
<a href="http://geometrylearning.com/">Lin Gao</a>
<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI)</em>, 2024  
<br>
<a href="https://ieeexplore.ieee.org/abstract/document/10489854">paper</a> /
<a href="https://yihua7.github.io/NeRF-Texture-web/">project page</a> /
<a href="https://github.com/yihua7/NeRF-Texture">code</a>
<br>
We propose an algorithm to synthesize NeRF textures on arbitrary manifolds. By using a patch-matching method on curved surfaces, we can smoothly quilt texture patches on mesh surfaces. We create a multi-resolution pyramid for a fast patch-matching process. By incorporating a reflection network, we preserve high-frequency view-dependent features such as highlights and mirror reflections in the final synthesized results.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[5]  <papertitle>NeRF-Texture: Texture Synthesis with Neural Radiance Fields</papertitle>
<br>
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong></a>,
<a href="https://yanpei.me/">Yan-Pei Cao</a>,
<a href="https://users.cs.cf.ac.uk/Yukun.Lai/">Yu-Kun Lai</a>,
<a>Ying Shan</a>,
<a href="http://geometrylearning.com/">Lin Gao</a>
<br>
<em>ACM SIGGRAPH 2023 Conference Proceedings (SIGGRAPH)</em>, 2023  
<br>
<a href="https://dl.acm.org/doi/pdf/10.1145/3588432.3591484">paper</a> /
<a href="https://yihua7.github.io/NeRF-Texture-web/">project page</a> /
<a href="https://github.com/yihua7/NeRF-Texture">code</a>
<br>
We introduce a NeRF-based system to acquire, synthesize, map, and relight textures from real-world textures. A novel coarse-fine disentangling representation is proposed to model meso-structures of textures. Acquired textures are synthesized by an implicit patch-matching algorithm.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[4]  <papertitle>StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning</papertitle>
<br>
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong></a>,
<a>Yue He</a>,
<a href="http://people.geometrylearning.com/yyj/">Yu-Jie Yuan</a>,
<a href="https://users.cs.cf.ac.uk/Yukun.Lai/">Yu-Kun Lai</a>,
<a href="http://geometrylearning.com/">Lin Gao</a>
<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, 2022  
<br>
<a href="https://arxiv.org/abs/2205.12183">arxiv</a> /
<a href="http://geometrylearning.com/StylizedNeRF/">project page</a> /
<a href="https://github.com/IGLICT/StylizedNeRF">code</a>
<br>
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF to fuse the stylization ability of 2D stylization network with the 3D consistency of NeRF.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[3]  <papertitle>Learning Critically: Selective Self Distillation in Federated Learning on Non-IID Data</papertitle>
<br>
<a href="https://yutinghe20.github.io/YutingHe/">Yuting He</a>,
<a href="https://people.ucas.ac.cn/~yqchen">Yiqiang Chen</a>,
<a>XiaoDong Yang</a>,
<a>Hanchao Yu</a>,
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong></a>,
<a>Yang Gu</a>
<br>
<em>IEEE Transactions on Big Data (TBD)</em>, 2022  
<br>
<a href="https://www.computer.org/csdl/journal/bd/5555/01/09826416/1EVdvuSiENO">paper</a>
<br>
We propose a Selective Self-Distillation method for Federated learning (FedSSD), which imposes adaptive constraints on the local updates by self-distilling the global model's knowledge and selectively weighting it by evaluating the credibility at both the class and sample level.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[2]  <papertitle>Neural Radiance Fields from Sparse RGB-D Images for High-Quality View Synthesis</papertitle>
<br>
<a href="http://people.geometrylearning.com/yyj/">Yu-Jie Yuan</a>,
<a href="https://users.cs.cf.ac.uk/Yukun.Lai/">Yu-Kun Lai</a>,
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong></a>,
<a href="https://www.graphics.rwth-aachen.de/person/3/">Leif Kobbelt</a>,
<a href="http://geometrylearning.com/">Lin Gao</a>,
<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI)</em>, 2022  
<br>
<a href="https://ieeexplore.ieee.org/document/9999509">paper</a> /
<a href="http://geometrylearning.com/rgbdnerf/">project page</a>
<!-- <a href="https://github.com/IGLICT/StylizedNeRF">code</a> -->
<br>
We introduce a novel NeRF reconstruction method using RGB-D inputs from a consumer-level device (iPad), which enables high-quality reconstruction from sparse inputs. Experiments show that the proposed method achieves state-of-the-art novel view synthesis quality with sparse RGB-D inputs.
</td>
</tr>
<tr onmouseout="motionblur_stop()" onmouseover="motionblur_start()">
<td style="padding-top:0px;padding-bottom:10px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
[1]  <papertitle>Multiscale Mesh Deformation Component Analysis with Attention-based Autoencoders</papertitle>
<br>
<a href="http://people.geometrylearning.com/~jieyang/">Jie Yang</a>,
<a href="http://geometrylearning.com/">Lin Gao</a>,
<a href="https://qytan.com/">Qingyang Tan</a>,
<a href="https://yihua7.github.io/website/"><strong>Yihua Huang</strong></a>,
<a>Shihong Xia</a>,
<a href="https://users.cs.cf.ac.uk/Yukun.Lai/">Yu-Kun Lai</a>
<br>
<em>IEEE Transactions on Visualization and Computer Graphics (TVCG)</em>, 2021  
<br>
<a href="https://arxiv.org/abs/2012.02459">arxiv</a>
<br>
We propose a novel method to automatically extract multiscale deformation components with a stacked attention-based autoencoder.
</td>
</tr>
<!-- <tr>
<td style="padding-top:0px;padding-bottom:20px;padding-left:20px;padding-right:20px;width:75%;vertical-align:middle">
<heading>Services</heading>
</td>
</tr>
<p>Paper reviewer: CVPR</p> -->
</tbody>
</table>
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20">
<tbody>
<tr>
<td>
<heading>Services</heading>
<p>
<strong>Internships:</strong> Tencent (2023 Summer) <br>
<strong>Paper reviewer:</strong> CVPR, ICCV, NeurIPS, ICLR, ECCV, TVCG, ACCV, Pacific Graphics, Virtual Reality, Computers &amp; Graphics<br>
<strong>Invited talks:</strong> Deep Blue College 2023, Graphics And Mixed Environment Seminar (GAMES) 2022<br>
</p>
</td>
</tr>
</tbody>
</table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:center;font-size:small;">
Kudos to <a href="https://jonbarron.info/">Dr. Jon Barron</a> for sharing his website template.
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</table>
</body>
</html>