This repo is derived from my study notes and will be used as a place for triaging new research papers.

- Princeton Shape Benchmark (2003) [Link]
- Matterport3D: Learning from RGB-D Data in Indoor Environments (2017) [Link] This platform provides RGB images from 1,000 point clouds, as well as multimodal sensor data: surface normals, depth, and, for a fraction of the spaces, semantic object annotations.
- ScanNet (2017) [Link]
- InteriorNet: Our dataset contains 20M images created by the following pipeline: (A) We collect around 1 million CAD models provided by world-leading furniture manufacturers.
- SMF is applied to register and perform expression transfer on scans captured in the wild with an iPhone depth camera, represented either as meshes or point clouds.
- CoMA introduces mesh sampling operations that enable a hierarchical mesh representation, capturing non-linear variations in shape and expression at multiple scales within the model.
- MINOS leverages large datasets of complex 3D environments and supports flexible configuration of multimodal sensor suites.
- We use statistics from concentric spherical shells to define representative features and resolve the point-order ambiguity, allowing traditional convolution to operate efficiently on such features (see the first sketch after this list).
- An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface (see the second sketch after this list).
- Compared to other 3D object datasets, our proposed dataset contains an assembling sequence of unit primitives.

- Deformable Shape Completion with Graph Convolutional Autoencoders (2018 CVPR) [Paper]
- Global-to-Local Generative Model for 3D Shapes (SIGGRAPH Asia 2018) [Paper] [Code]
- ALIGNet: Partial-Shape Agnostic Alignment via Unsupervised Learning (TOG 2018) [Paper] [Code]
- GAL: Geometric Adversarial Loss for Single-View 3D-Object Reconstruction (2018) [Paper]
- Visual Object Networks: Image Generation with Disentangled 3D Representation (2018) [Paper]
- Learning to Infer and Execute 3D Shape Programs (2019) [Paper]
- Learning View Priors for Single-view 3D Reconstruction (CVPR 2019) [Paper]
- Learning Embedding of 3D models with Quadric Loss (BMVC 2019) [Paper] [Code]
- CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition (ICCV 2019) [Paper] [Code]
- Parsing of Large-scale 3D Point Clouds (2017) [Paper]
- Semantic Segmentation of Indoor Point Clouds using Convolutional Neural Networks (2017) [Paper]
- SEGCloud: Semantic Segmentation of 3D Point Clouds (2017) [Paper]
- Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55 (2017) [Paper]
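The concentric-shell statistics mentioned above lend themselves to a short sketch. This is a minimal NumPy illustration, not the paper's exact design: the shell radii and the per-shell statistics (normalized point count, mean radial distance, relative centroid) are assumptions made for the example.

```python
import numpy as np

def shell_features(points, center, radii=(0.1, 0.2, 0.4, 0.8)):
    """Summarize the neighborhood of `center` with order-invariant
    statistics computed over concentric spherical shells.

    points : (N, 3) array, the full point cloud
    center : (3,) array, the query point
    radii  : increasing shell boundaries (illustrative values)
    """
    dists = np.linalg.norm(points - center, axis=1)
    feats = []
    inner = 0.0
    for outer in radii:
        mask = (dists >= inner) & (dists < outer)
        shell = points[mask]
        if len(shell) == 0:
            # Empty shell: zero statistics keep the feature length fixed.
            feats.extend([0.0, 0.0, 0.0, 0.0, 0.0])
        else:
            centroid = shell.mean(axis=0) - center   # centroid relative to query
            feats.extend([len(shell) / len(points),  # normalized point density
                          dists[mask].mean(),        # mean radial distance
                          *centroid])                # 3 centroid coordinates
        inner = outer
    # The statistics are invariant to point order, so a regular convolution
    # can then be applied across shells or across query points.
    return np.asarray(feats, dtype=np.float32)
```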
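To make the implicit-field idea concrete, here is a small sketch that evaluates an analytic signed distance function (standing in for a learned network) on a dense grid and extracts the zero iso-surface with marching cubes; it assumes scikit-image is installed.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def sphere_sdf(pts, radius=0.5):
    """Stand-in for a learned implicit function: signed distance to a
    sphere, negative inside and positive outside."""
    return np.linalg.norm(pts, axis=-1) - radius

# Evaluate the field on a dense grid covering [-1, 1]^3.
n = 64
lin = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
values = sphere_sdf(grid.reshape(-1, 3)).reshape(n, n, n)

# Extract the shape as the zero iso-surface of the field.
verts, faces, normals, _ = measure.marching_cubes(values, level=0.0)
verts = verts / (n - 1) * 2.0 - 1.0  # map voxel indices back to world coordinates
print(verts.shape, faces.shape)
```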
- Using Locally Corresponding CAD Models for Dense 3D Reconstructions from a Single Image (2017) [Paper]
- Compact Model Representation for 3D Reconstruction (2017) [Paper]
- Image2Mesh: A Learning Framework for Single Image 3D Reconstruction (2017) [Paper]
- Learning free-form deformations for 3D object reconstruction (2018) [Paper]
- Variational Autoencoders for Deforming 3D Mesh Models (2018 CVPR) [Paper]
- Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images (2018 CVPR) [Paper]
- Model Composition from Interchangeable Components (2007) [Paper]
- Data-Driven Suggestions for Creativity Support in 3D Modeling (2010) [Paper]
- Photo-Inspired Model-Driven 3D Object Modeling (2011) [Paper]
- Probabilistic Reasoning for Assembly-Based 3D Modeling (2011) [Paper]
- A Probabilistic Model for Component-Based Shape Synthesis (2012) [Paper]
- Structure Recovery by Part Assembly (2012) [Paper]
- Fit and Diverse: Set Evolution for Inspiring 3D Shape Galleries (2012) [Paper]
- AttribIt: Content Creation with Semantic Attributes (2013) [Paper]
- Learning Part-based Templates from Large Collections of 3D Shapes (2013) [Paper]
- Topology-Varying 3D Shape Creation via Structural Blending (2014) [Paper]
- Estimating Image Depth using Shape Collections (2014) [Paper]
- Single-View Reconstruction via Joint Analysis of Image and Shape Collections (2015) [Paper]
- Interchangeable Components for Hands-On Assembly Based Modeling (2016) [Paper]
- Shape Completion from a Single RGBD Image (2016) [Paper]
- Learning to Generate Chairs, Tables and Cars with Convolutional Networks (2014) [Paper]
- Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis (2015 NIPS) [Paper] [Code]
- Analysis and synthesis of 3D shape families via deep-learned generative models of surfaces (2015) [Paper]
- Multi-view 3D Models from Single Images with a Convolutional Network (2016) [Paper] [Code]
- View Synthesis by Appearance Flow (2016) [Paper] [Code]
- Voxlets: Structured Prediction of Unobserved Voxels From a Single Depth Image (2016) [Paper] [Code]
- 3D-R2N2: 3D Recurrent Reconstruction Neural Network (2016) [Paper] [Code]
- Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision (2016) [Paper]
- TL-Embedding Network: Learning a Predictable and Generative Vector Representation for Objects (2016) [Paper]
- 3D GAN: Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (2016) [Paper]
- 3D Shape Induction from 2D Views of Multiple Objects (2016) [Paper]
- Unsupervised Learning of 3D Structure from Images (2016) [Paper]
- Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency (2017) [Paper]
- Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes with Deep Generative Networks (2017) [Paper] [Code]
- Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis (2017) [Paper] [Code]
- Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs (2017) [Paper] [Code]
- Hierarchical Surface Prediction for 3D Object Reconstruction (2017) [Paper]
- OctNetFusion: Learning Depth Fusion from Data (2017) [Paper] [Code]
- A Point Set Generation Network for 3D Object Reconstruction from a Single Image (2017) [Paper] [Code]
- Learning Representations and Generative Models for 3D Point Clouds (2017) [Paper] [Code]
- Shape Generation using Spatially Partitioned Point Clouds (2017) [Paper]
- PCPNET: Learning Local Shape Properties from Raw Point Clouds (2017) [Paper]
- Transformation-Grounded Image Generation Network for Novel 3D View Synthesis (2017) [Paper] [Code]
- Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering (2017) [Paper]
- 3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks (2017) [Paper] [Code]
- Interactive 3D Modeling with a Generative Adversarial Network (2017) [Paper]
- Weakly supervised 3D Reconstruction with Adversarial Constraint (2017) [Paper] [Code]
- SurfNet: Generating 3D shape surfaces using deep residual networks (2017) [Paper]
- Learning to Reconstruct Symmetric Shapes using Planar Parameterization of 3D Surface (2019) [Paper] [Code]
- GRASS: Generative Recursive Autoencoders for Shape Structures (SIGGRAPH 2017) [Paper] [Code] [Code]
- 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks (2017) [Paper] [Code]
- Neural 3D Mesh Renderer (2017) [Paper] [Code]
- Pix2vox: Sketch-Based 3D Exploration with Stacked Generative Adversarial Networks (2017) [Code]
- What You Sketch Is What You Get: 3D Sketching using Multi-View Deep Volumetric Prediction (2017) [Paper]
- MarrNet: 3D Shape Reconstruction via 2.5D Sketches (2017) [Paper]
- Learning a Multi-View Stereo Machine (2017 NIPS) [Paper]
- 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions (2017) [Paper]
- Scaling CNNs for High Resolution Volumetric Reconstruction from a Single Image (2017) [Paper]
- ComplementMe: Weakly-Supervised Component Suggestions for 3D Modeling (2017) [Paper]
- Learning Descriptor Networks for 3D Shape Synthesis and Analysis (2018 CVPR) [Project] [Paper] [Code]

Join the community with this link.

- All of the scenes are semantically annotated at the object level.
- 127,915 3D CAD models from 662 categories.
- We propose a pointwise convolution that performs on-the-fly voxelization to learn local features of a point cloud (see the first sketch after this list).
- InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset [Link] (D) We provide an interactive simulator (ViSim) to help create ground-truth IMU and event data, as well as monocular or stereo camera trajectories, including hand-drawn, random-walk, and neural-network-based realistic trajectories.
- SMF: mesh convolution decoders are combined with a specialized PCA model of the mouth, smoothly blended based on geodesic distances, to create a compact model that is highly robust to noise.
- CoMA is a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface (see the second sketch after this list).
- Three key open problems for point cloud object classification are identified, and a new point cloud classification network that achieves state-of-the-art performance on objects with cluttered backgrounds is proposed.
- This work introduces a dataset for geometric deep learning consisting of over 1 million individual, high-quality geometric models, each associated with accurate ground-truth information on the decomposition into patches, explicit sharp-feature annotations, and analytic differential properties.
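A rough NumPy sketch of the on-the-fly voxelization behind pointwise convolution, as described above. The 3×3×3 kernel, the cell size, and per-cell mean pooling are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def pointwise_conv(points, feats, weights, cell=0.1, k=3):
    """Sketch of a pointwise convolution: for every point, its neighbors
    are voxelized on the fly into a k x k x k grid centered at that point,
    features are averaged per cell, and a kernel is applied.

    points  : (N, 3) coordinates
    feats   : (N, C_in) per-point features
    weights : (k, k, k, C_in, C_out) kernel (illustrative shape)
    cell    : edge length of one kernel cell (hyperparameter)
    """
    n = len(points)
    out = np.zeros((n, weights.shape[-1]), dtype=np.float32)
    half = k * cell / 2.0
    for i, p in enumerate(points):
        rel = points - p
        inside = np.all(np.abs(rel) < half, axis=1)
        idx = ((rel[inside] + half) / cell).astype(int).clip(0, k - 1)
        for ijk in {tuple(t) for t in idx}:
            # Average the features of the points that fall into this cell,
            # then apply that cell's (C_in, C_out) weight matrix.
            mask = np.all(idx == ijk, axis=1)
            mean_feat = feats[inside][mask].mean(axis=0)
            out[i] += mean_feat @ weights[ijk]
    return out
```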
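For context on "spectral convolutions on a mesh surface": models in the CoMA family filter per-vertex features with Chebyshev polynomials of the mesh graph Laplacian. A minimal sketch using a dense Laplacian for readability (practical implementations use sparse operators):

```python
import numpy as np

def chebyshev_conv(x, laplacian, theta):
    """Spectral mesh convolution via the Chebyshev recurrence.

    x         : (V, C) per-vertex features on the mesh
    laplacian : (V, V) normalized Laplacian, rescaled as 2L/lmax - I
    theta     : (K, C, F) Chebyshev coefficients for a K-order filter
    """
    K = theta.shape[0]
    t_prev, t_curr = None, x                     # T_0(L) x = x
    out = t_curr @ theta[0]
    if K > 1:
        t_prev, t_curr = t_curr, laplacian @ x   # T_1(L) x = L x
        out += t_curr @ theta[1]
    for k in range(2, K):
        # Chebyshev recurrence: T_k = 2 L T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2 * laplacian @ t_curr - t_prev
        out += t_curr @ theta[k]
    return out  # (V, F) filtered vertex features
```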
- The Space of Human Body Shapes: Reconstruction and Parameterization from Range Scans (2003) [Paper]
- SMPL-X: Expressive Body Capture: 3D Hands, Face, and Body from a Single Image (2019) [Paper] [Video] [Code]
- PIFuHD: Multi-Level Pixel Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020) [Paper] [Video] [Code]
- ExPose: Monocular Expressive Body Regression through Body-Driven Attention (2020) [Paper] [Video] [Code]
- DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image (2017) [Paper]
- Mesh-based Autoencoders for Localized Deformation Component Analysis (2017) [Paper]
- Exploring Generative 3D Shapes Using Autoencoder Networks (Autodesk 2017) [Paper]
- SceneNN (2016) [Link] 100+ indoor scene meshes with per-vertex and per-pixel annotation.
- Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer [Paper] [Site] [Code]
- Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning [Paper] [Code]
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [Project] [Paper] [Code]
- GAMesh: Guided and Augmented Meshing for Deep Point Networks (3DV 2020) [Project] [Paper] [Code]
- Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis (2020 TPAMI) [Paper]

- The Combinatorial 3D Shape Dataset is composed of 406 instances of 14 classes; all 3D objects are fully annotated with category labels. Furthermore, we can sample valid random sequences from a given combinatorial shape after validating the sampled sequences (see the sketch at the end of this section).
- VOCASET is a 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio.
- These models have been used in real-world production.
- We have also created a Slack workspace for people around the globe to ask questions, share knowledge, and facilitate collaboration.
- A Morphable Model For The Synthesis Of 3D Faces (1999) [Paper] [Code]
- Fusion 360 Gallery Dataset (2020) [Link] [Paper]
- There are a total of 120 scenes in version 1.0 of the THOR environment, covering four room categories: kitchens, living rooms, bedrooms, and bathrooms.

- Texture Synthesis Using Convolutional Neural Networks (2015) [Paper]
- Two-Shot SVBRDF Capture for Stationary Materials (SIGGRAPH 2015) [Paper]
- Reflectance Modeling by Neural Texture Synthesis (2016) [Paper]
- Modeling Surface Appearance from a Single Photograph using Self-augmented Convolutional Neural Networks (2017) [Paper]
- High-Resolution Multi-Scale Neural Texture Synthesis (2017) [Paper]
- Reflectance and Natural Illumination from Single Material Specular Objects Using Deep Learning (2017) [Paper]
- Joint Material and Illumination Estimation from Photo Sets in the Wild (2017) [Paper]
- What Is Around The Camera?
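As a loose illustration of the "sample, then validate" idea for assembly sequences mentioned above, here is a hypothetical sketch: the validity rule (each newly placed unit must touch an already-placed one) and the `adjacency` input are assumptions made for the example, not the dataset's actual criterion.

```python
import random

def sample_valid_sequence(adjacency, max_tries=1000):
    """Rejection-sample an assembly order over unit primitives.

    adjacency : dict mapping primitive id -> set of touching primitive ids
                (the connectivity rule used here is a hypothetical stand-in
                for the dataset's real validity check).
    """
    units = list(adjacency)
    for _ in range(max_tries):
        seq = random.sample(units, len(units))  # random permutation
        placed = {seq[0]}
        valid = True
        for u in seq[1:]:
            if adjacency[u] & placed:   # new unit touches a placed one
                placed.add(u)
            else:
                valid = False
                break
        if valid:
            return seq
    return None  # no valid sequence found within the budget

# Example: four unit cubes in a row (0-1-2-3).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sample_valid_sequence(adj))
```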