CONTI, FRANCESCO
Geographic distribution
Continent #
NA - North America 5,577
EU - Europe 3,159
AS - Asia 1,475
AF - Africa 138
SA - South America 19
OC - Oceania 10
Unknown continent - continent information not available 3
Total 10,381
Country #
US - United States of America 5,426
IT - Italy 1,046
GB - United Kingdom 627
CN - China 492
DE - Germany 478
SG - Singapore 467
SE - Sweden 254
IN - India 157
CA - Canada 150
VN - Vietnam 143
IE - Ireland 119
FR - France 118
FI - Finland 73
CH - Switzerland 71
CI - Côte d'Ivoire 71
BG - Bulgaria 66
RU - Russian Federation 62
NL - Netherlands 49
ES - Spain 48
ID - Indonesia 34
ZA - South Africa 34
HK - Hong Kong 32
JP - Japan 32
UA - Ukraine 30
EE - Estonia 29
JO - Jordan 26
TR - Turkey 26
BE - Belgium 22
TG - Togo 21
GR - Greece 17
PH - Philippines 15
AT - Austria 13
BR - Brazil 13
IR - Iran 10
AU - Australia 8
HR - Croatia 8
PL - Poland 8
TW - Taiwan 8
KR - Korea 7
LB - Lebanon 7
MY - Malaysia 7
DZ - Algeria 6
IL - Israel 4
RO - Romania 4
DK - Denmark 3
LU - Luxembourg 3
SI - Slovenia 3
A2 - unrecognized country code 2
AR - Argentina 2
CL - Chile 2
CZ - Czech Republic 2
EG - Egypt 2
NO - Norway 2
NZ - New Zealand 2
UZ - Uzbekistan 2
AE - United Arab Emirates 1
CO - Colombia 1
CY - Cyprus 1
DO - Dominican Republic 1
EU - Europe 1
HU - Hungary 1
KE - Kenya 1
KZ - Kazakhstan 1
LA - Lao People's Democratic Republic 1
LK - Sri Lanka 1
LT - Lithuania 1
LV - Latvia 1
MA - Morocco 1
MU - Mauritius 1
NG - Nigeria 1
PE - Peru 1
SK - Slovakia (Slovak Republic) 1
TH - Thailand 1
Total 10,381
City #
Ann Arbor 1,797
Southend 488
Chandler 419
Singapore 417
Fairfield 357
Santa Clara 302
Bologna 247
Ashburn 228
Wilmington 184
Woodbridge 177
Munich 147
Seattle 145
Princeton 132
Houston 125
Montréal 123
Dublin 119
Cambridge 118
Boardman 108
Mcallen 103
Turin 92
Abidjan 71
Milan 70
Sofia 66
Helsinki 61
Dong Ket 51
Redmond 48
Medford 47
Berlin 46
Westminster 41
Nanjing 39
Rome 38
Jakarta 33
San Diego 29
Amman 26
Dearborn 26
Florence 26
New York 26
Los Angeles 25
Jinan 24
Redwood City 24
Falls Church 22
Padova 22
Shanghai 22
Lomé 21
Chicago 20
Guangzhou 20
Sant Esteve de Palautordera 20
Zurich 20
Shenyang 19
Istanbul 18
Norwalk 18
Beijing 17
Ravenna 17
Bern 15
Brussels 15
Bühl 15
Frankfurt am Main 15
Nuremberg 15
Saint Petersburg 15
Hong Kong 14
Bagnacavallo 13
Changsha 13
Des Moines 13
Hangzhou 13
Modena 13
Tianjin 13
Boydton 12
Duncan 12
Nanchang 12
Paris 12
San Lazzaro di Savena 12
Tokyo 12
Brandenburg 11
Cagliari 11
Castel Maggiore 11
Hebei 11
Washington 11
Zhengzhou 11
Davao City 10
Phoenix 10
Wuhan 10
Fremont 9
Jiaxing 9
Toronto 9
Verona 9
Amsterdam 8
Forlì 8
Johannesburg 8
London 8
Ottawa 8
Tappahannock 8
Cesena 7
Haikou 7
Kansas City 7
Kolkata 7
Leawood 7
Ningbo 7
Taiyuan 7
Trento 7
Xi'an 7
Total 7,398
Name #
Enabling the Heterogeneous Accelerator Model on Ultra-Low Power Microcontroller Platforms 583
PULP: A parallel ultra low power platform for next generation IoT applications 276
A mixed-precision RISC-V processor for extreme-edge DNN inference 242
4.4 A 1.3TOPS/W @ 32GOPS Fully Integrated 10-Core SoC for IoT End-Nodes with 1.7μW Cognitive Wake-Up from MRAM-Based State-Retentive Sleep Mode 220
Slow and steady wins the race? A comparison of ultra-low-power RISC-V cores for Internet-of-Things applications 207
Accelerated Visual Context Classification on a Low-Power Smartwatch 206
Robust Real-Time Embedded EMG Recognition Framework Using Temporal Convolutional Networks on a Multicore IoT Processor 197
Work-in-Progress: Quantized NNs as the Definitive solution for inference on low-power ARM MCUs? 196
XpulpNN: Accelerating Quantized Neural Networks on RISC-V Processors Through ISA Extensions 193
An IoT Endpoint System-on-Chip for Secure and Energy-Efficient Near-Sensor Analytics 188
Energy efficient parallel computing on the PULP platform with support for OpenMP 181
PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors 175
A Heterogeneous Multi-Core System-on-Chip for Energy Efficient Brain Inspired Computing 175
Enabling mixed-precision quantized neural networks in extreme-edge devices 172
Brain-inspired classroom occupancy monitoring on a low-power mobile platform 167
Chipmunk: A systolically scalable 0.9 mm2, 3.08Gop/s/mW @ 1.2 mW accelerator for near-sensor recurrent neural network inference 161
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones 160
An Open Source and Open Hardware Deep Learning-Powered Visual Navigation Engine for Autonomous Nano-UAVs 156
On-the-fly adaptivity for process networks over shared-memory platforms 154
NEURAghe: Exploiting CPU-FPGA synergies for efficient and flexible CNN inference acceleration on zynQ SoCs 152
Memory-Latency-Accuracy Trade-Offs for Continual Learning on a RISC-V Extreme-Edge Node 152
A heterogeneous multi-core system-on-chip for energy efficient brain inspired vision 150
Curbing the roofline: a scalable and flexible architecture for CNNs on FPGA 150
Energy-efficient vision on the PULP platform for ultra-low power parallel computing 149
PULP: A Ultra-Low Power Parallel Accelerator for Energy-Efficient and Flexible Embedded Vision 144
Tightly-coupled hardware support to dynamic parallelism acceleration in embedded shared memory clusters 144
PULP-NN: A computing library for quantized neural network inference at the edge on RISC-V based parallel ultra low power clusters 143
Always-On 674μW@4GOP/s Error Resilient Binary Neural Networks With Aggressive SRAM Voltage Scaling on a 22-nm IoT End-Node 143
Work-in-progress: Dory: Lightweight memory hierarchy management for deep NN inference on iot endnodes 142
A Self-Aware Architecture for PVT Compensation and Power Nap in Near Threshold Processors 141
Multi-core data analytics SoC with a flexible 1.76 Gbit/s AES-XTS cryptographic accelerator in 65 nm CMOS 138
He-P2012: Architectural heterogeneity exploration on a scalable many-core platform 137
Thermal Image-based CNN's for Ultra-low Power People Recognition 136
GAP-8: A RISC-V SoC for AI at the Edge of the IoT 135
An Ultra-Low Power Address-Event Sensor Interface for Energy-Proportional Time-To-Information Extraction 134
XNOR Neural Engine: A Hardware Accelerator IP for 21.6-fJ/op Binary Neural Network Inference 134
Exploring NEURAGHE: A Customizable Template for APSoC-based CNN Inference at the Edge 130
A Ultra-Low-Energy Convolution Engine for Fast Brain-Inspired Vision in Multicore Clusters 129
PULP-TrainLib: Enabling On-Device Training for RISC-V Multi-core MCUs Through Performance-Driven Autotuning 126
Synthesis-friendly techniques for tightly-coupled integration of hardware accelerators into shared-memory multi-core clusters 125
ALOHA: An Architectural-aware Framework for Deep Learning at the Edge 121
He-P2012: Performance and Energy Exploration of Architecturally Heterogeneous Many-Cores 120
A Microcontroller is All You Need: Enabling Transformer Execution on Low-Power IoT Endnodes 120
GVSoC: A Highly Configurable, Fast and Accurate Full-Platform Simulator for RISC-V based IoT Processors 119
Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs 117
XpulpNN: Enabling Energy Efficient and Flexible Inference of Quantized Neural Networks on RISC-V Based IoT End Nodes 113
Siracusa: A Low-Power On-Sensor RISC-V SoC for Extended Reality Visual Processing in 16nm CMOS 113
Online process transformation for polyhedral process networks in shared-memory MPSoCs 111
A 1.15 TOPS/W, 16-Cores Parallel Ultra-Low Power Cluster with 2b-to-32b Fully Flexible Bit-Precision and Vector Lockstep Execution Mode 111
Optimization and deployment of CNNs at the Edge: The ALOHA experience 110
Temporal Variability Analysis in sEMG Hand Grasp Recognition using Temporal Convolutional Networks 108
Quentin: an ultra-low-power PULPissimo SoC in 22nm FDX 106
DORY: Automatic End-to-End Deployment of Real-World DNNs on Low-Cost IoT MCUs 104
Vega: A Ten-Core SoC for IoT Endnodes with DNN Acceleration and Cognitive Wake-Up from MRAM-Based State-Retentive Sleep Mode 99
Pushing On-chip Memories Beyond Reliability Boundaries in Micropower Machine Learning Applications 97
End-To-end 100-TOPS/W Inference with Analog In-Memory Computing: Are We There Yet? 96
Improving Autonomous Nano-Drones Performance via Automated End-to-End Optimization and Deployment of DNNs 96
A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays 87
A Heterogeneous In-Memory Computing Cluster for Flexible End-to-End Inference of Real-World Deep Neural Networks 86
Pruning in Time (PIT): A Lightweight Network Architecture Optimizer for Temporal Convolutional Networks 81
Dustin: A 16-Cores Parallel Ultra-Low-Power Cluster With 2b-to-32b Fully Flexible Bit-Precision and Vector Lockstep Execution Mode 78
AI-Powered Collision Avoidance Safety System for Industrial Woodworking Machinery 69
An Extreme-Edge TCN-Based Low-Latency Collision-Avoidance Safety System for Industrial Machinery 68
TCN Mapping Optimization for Ultra-Low Power Time-Series Edge Inference 67
Architecting more than Moore: wireless plasticity for massive heterogeneous computer architectures (WiPLASH) 66
Vau Da Muntanialas: Energy-Efficient Multi-Die Scalable Acceleration of RNN Inference 62
A Multi-Precision Bit-Serial Hardware Accelerator IP for Deep Learning Enabled Internet-of-Things 61
Darkside: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training 54
SNE: an Energy-Proportional Digital Accelerator for Sparse Event-Based Convolutions 53
Scale up your In-Memory Accelerator: Leveraging Wireless-on-Chip Communication for AIMC-based CNN Inference 52
Fünfiiber-Drone: A Modular Open-Platform 18-grams Autonomous Nano-Drone 51
Motor-Unit Ordering of Blindly-Separated Surface-EMG Signals for Gesture Recognition 50
Darkside: 2.6GFLOPS, 8.7mW Heterogeneous RISC-V Cluster for Extreme-Edge On-Chip DNN Inference and Training 49
To buffer, or not to buffer? A case study on FFT accelerators for ultra-low-power multicore clusters 49
A RISC-V-based FPGA Overlay to Simplify Embedded Accelerator Deployment 49
RNN-Based Radio Resource Management on Multicore RISC-V Accelerator Architectures 48
Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge 43
A Sim-to-Real Deep Learning-based Framework for Autonomous Nano-drone Racing 42
End-to-End DNN Inference on a Massively Parallel Analog In Memory Computing Architecture 42
A 3 TOPS/W RISC-V Parallel Cluster for Inference of Fine-Grain Mixed-Precision Quantized Neural Networks 42
sEMG Neural Spikes Reconstruction for Gesture Recognition on a Low-Power Multicore Processor. In: Biomedical Circuits and Systems 42
Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC With 2-8 b DNN Acceleration and 30%-Boost Adaptive Body Biasing 40
RISC-V Processor Technologies for Aerospace Applications in the ISOLDE Project 34
RedMulE: A Compact FP16 Matrix-Multiplication Accelerator for Adaptive Deep Learning on RISC-V-Based Ultra-Low-Power SoCs 34
TCN Mapping Optimization for Ultra-Low Power Time-Series Edge Inference 32
ECHOES: a 200 GOPS/W Frequency Domain SoC with FFT Processor and I2S DSP for Flexible Data Acquisition from Microphone Arrays 32
Graphene-based Wireless Agile Interconnects for Massive Heterogeneous Multi-chip Processors 31
Graphene-based Wireless Agile Interconnects for Massive Heterogeneous Multi-chip Processors 28
Specialization meets Flexibility: a Heterogeneous Architecture for High-Efficiency, High-flexibility AR/VR Processing 28
RedMule: A mixed-precision matrix–matrix operation engine for flexible and energy-efficient on-chip linear algebra and TinyML training acceleration 28
HTVM: Efficient Neural Network Deployment On Heterogeneous TinyML Platforms 28
22.1 A 12.4TOPS/W @ 136GOPS AI-IoT System-on-Chip with 16 RISC-V, 2-to-8b Precision-Scalable DNN Acceleration and 30%-Boost Adaptive Body Biasing 27
Siracusa: A 16 nm Heterogenous RISC-V SoC for Extended Reality With At-MRAM Neural Engine 26
Hybrid Modular Redundancy: Exploring Modular Redundancy Approaches in RISC-V Multi-Core Computing Clusters for Reliable Processing in Space 26
ViT-LR: Pushing the Envelope for Transformer-Based On-Device Embedded Continual Learning 24
PULP Fiction No More-Dependable PULP Systems for Space 22
WIP: Automatic DNN Deployment on Heterogeneous Platforms: The GAP9 Case Study 21
Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge 17
Reduced precision floating-point optimization for Deep Neural Network On-Device Learning on microcontrollers 17
Compressed Latent Replays for Lightweight Continual Learning on Spiking Neural Networks 8
Total 10,698
Category #
all - all 28,144
article - articles 0
book - books 0
conference - conference papers 0
curatela - edited volumes 0
other - other 0
patent - patents 0
selected - selected 0
volume - volumes 0
Total 28,144


Year        Total   Jul  Aug  Sep  Oct  Nov  Dec  Jan  Feb  Mar  Apr  May  Jun
2019/2020   1,004     0    0    0    0    0  155  196  165  182  112   89  105
2020/2021     974   168   50   31   39   48   64   47   47  116  102   45  217
2021/2022   3,228   128   37  304  270  338  249  336  392  411  141  357  265
2022/2023   1,817   213  224   97  172  148  165   45   99  302   63  208   81
2023/2024     918    31   98   66   60  103  116  102   52   30   96   79   85
2024/2025   1,764   143  393  349  227  469  183    0    0    0    0    0    0
Total      10,706
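
Each academic-year total in the table above is the sum of its twelve monthly counts (the year runs July through June). A minimal Python sketch, illustrative only and not part of the original export, that checks this for the first two rows:

# Verify that a yearly total equals the sum of its monthly download counts.
# Values are copied from the first two rows of the table above (Jul..Jun order).
monthly_counts = {
    "2019/2020": [0, 0, 0, 0, 0, 155, 196, 165, 182, 112, 89, 105],
    "2020/2021": [168, 50, 31, 39, 48, 64, 47, 47, 116, 102, 45, 217],
}
for year, months in monthly_counts.items():
    print(year, sum(months))  # prints 1004 and 974, matching the Total column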