team name | team members | filename | flat cost | hie cost | description |
NEC-UIUC | NEC: Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu
UIUC: LiangLiang Cao, Zhen Li, Min-Hsuan Tsai, Xi Zhou, Thomas Huang
Rutgers: Tong Zhang | flat_opt.txt | 0.28191 | 2.1144 | using sift and lbp feature with two non-linear coding representations and stochastic SVM, optimized for top-5 hit rate |
NEC-UIUC | NEC: Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu
UIUC: LiangLiang Cao, Zhen Li, Min-Hsuan Tsai, Xi Zhou, Thomas Huang
Rutgers: Tong Zhang | hierarchical.txt | 0.28192 | 2.1107 | using sift and lbp feature with two non-linear coding representations and stochastic SVM, optimized for hierarchical cost |
NEC-UIUC | NEC: Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu
UIUC: LiangLiang Cao, Zhen Li, Min-Hsuan Tsai, Xi Zhou, Thomas Huang
Rutgers: Tong Zhang | flat_rerank.txt | 0.28299 | 2.1258 | using sift and lbp feature with two non-linear coding representations and stochastic SVM, optimized for top-5 hit rate with re-rank |
NEC-UIUC | NEC: Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu
UIUC: LiangLiang Cao, Zhen Li, Min-Hsuan Tsai, Xi Zhou, Thomas Huang
Rutgers: Tong Zhang | flat_avg.txt | 0.28782 | 2.164 | using sift and lbp feature with two non-linear coding representations and stochastic SVM |
XRCE | Jorge Sanchez, XRCE
Florent Perronnin, XRCE
Thomas Mensink, XRCE | xrce_res_18aug.txt | 0.33649 | 2.5553 | System based on the Fisher kernel framework as described in F. Perronnin, J. Sanchez and T. Mensink, "Improving the Fisher Kernel for Large-Scale Image Classification", ECCV, 2010. |
NEC-UIUC | NEC: Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu
UIUC: LiangLiang Cao, Zhen Li, Min-Hsuan Tsai, Xi Zhou, Thomas Huang
Rutgers: Tong Zhang | flat_single.txt | 0.34579 | 2.5986 | using sift and lbp feature with one non-linear coding representation and stochastic SVM |
Intelligent Systems and Informatics Lab., The University of Tokyo | Tatsuya Harada (The Univ. of Tokyo)
Hideki Nakayama (The Univ. of Tokyo)
Yoshitaka Ushiku (The Univ. of Tokyo)
Yuya Yamashita (The Univ. of Tokyo)
Jun Imura (The Univ. of Tokyo)
Yasuo Kuniyoshi (The Univ. of Tokyo)
| result_hie2_sift_ssim_surf_phog_rgb_hlac_ccd025_250.txt | 0.44558 | 3.6536 | --- Image features We use BoF (SIFT and Dense SURF), SSIM, PHOG, RGB Histogram, and HLAC. --- Label feature The label feature is a 1676 dimensional binary vector where the elements of the label featu |
UCI | Hamed Pirsiavash
Deva Ramanan
Charless Fowlkes | test400.pred.txt | 0.46624 | 3.6288 | dense sift + color features + grid based bag of features + approximated chi^2 kernel svm (In training, uses only the first 400 samples/class) |
hminmax | Jim Mutch, Sharat Chikkerur, Hristo Paskov, Ruslan Salakhutdinov, Stan Bileschi, Hueihan Jhuang. MIT | submit_sbow_gist_color_bestc101_cat_plus_sbow_liblinear.txt | 0.54433 | 4.4359 | all features concatenated and combined with sbow(liblinear) using PoE. Validation 0.76 |
hminmax | Jim Mutch, Sharat Chikkerur, Hristo Paskov, Ruslan Salakhutdinov, Stan Bileschi, Hueihan Jhuang. MIT | russ_test.txt | 0.57109 | 4.8581 | Scores combined using logistic regression (russ). Validation 0.7684 |
NTU_WZX | Zhengxiang Wang
CeMNet, SCE, NTU, Singapore
Liang-Tien Chia, Clement
CeMNet, SCE, NTU, Singapore | test_LI2C_Tr100_Triplet100.txt | 0.58313 | 4.9933 | Using the provided raw SIFT feature as input and use my LI2C algorithm for learning the weight associated with each feature (submitted to PR journal). |
hminmax | Jim Mutch, Sharat Chikkerur, Hristo Paskov, Ruslan Salakhutdinov, Stan Bileschi, Hueihan Jhuang. MIT | submit_sbow_gist_color_bestc101_cat.txt | 0.58678 | 4.8035 | All features concatenated. Validation 0.8 |
hminmax | Jim Mutch, Sharat Chikkerur, Hristo Paskov, Ruslan Salakhutdinov, Stan Bileschi, Hueihan Jhuang. MIT | submit_sbow_gist_color_bestc101_poe.txt | 0.59085 | 4.9916 | scores combined using PoE. Validation 0.8 |
LIG | Georges Quénot - LIG | LIG_knn_opp_sift_color_texture_rerank.top | 0.60723 | 5.0544 | Fusion of KNNs with dense and Harris-Laplace filtered opponent SIFTs and color histogram and Gabor texture, with re-ranking and optimized for the hierarchical measure |
LIG | Georges Quénot - LIG | LIG_knn_opp_sift_color_texture_rerank.top | 0.60723 | 5.0544 | Fusion of KNNs with dense and Harris-Laplace filtered opponent SIFTs and color histogram and Gabor texture, with re-ranking and optimized for the flat measure |
LIG | Georges Quénot - LIG | LIG_knn_opp_sift_color_texture_hierarchical.top | 0.62325 | 5.3676 | Fusion of KNNs with dense and Harris-Laplace filtered opponent SIFTs and color histogram and Gabor texture, optimized for the hierarchical measure |
LIG | Georges Quénot - LIG | LIG_knn_opp_sift_color_texture_flat.top | 0.62571 | 5.4182 | Fusion of KNNs with dense and Harris-Laplace filtered opponent SIFTs and color histogram and Gabor texture, optimized for the flat measure |
LIG | Georges Quénot - LIG | LIG_knn_color_texture_flat.top | 0.69458 | 6.1162 | KNN with color histogram and Gabor texture, optimized for the flat measure |
LIG | Georges Quénot - LIG | LIG_knn_color_texture_hierarchical.top | 0.69843 | 6.0582 | KNN with color histogram and Gabor texture, optimized for the hierarchical measure |
ibm-ensemble | Lexing Xie, IBM Research
Hua Ouyang, Georgia Tech
Apostol Natsev, IBM Research | f18_0p9_k20.txt | 0.70093 | 6.0742 | fast knn ensemble 20 |
ibm-ensemble | Lexing Xie, IBM Research
Hua Ouyang, Georgia Tech
Apostol Natsev, IBM Research | f18_0p9_k15.txt | 0.71411 | 6.2223 | fast knn ensemble 15 |
hminmax | Jim Mutch, Sharat Chikkerur, Hristo Paskov, Ruslan Salakhutdinov, Stan Bileschi, Hueihan Jhuang. MIT | sbow_liblinear.txt | 0.73079 | 6.1884 | Baseline liblinear using 600k examples |
National Institute of Informatics | Cai-Zhi ZHU @ National Institute of Informatics, Tokyo, Japan
Xiao ZHOU @ Hefei Normal Univ., Hefei, China
Shin'ichi Satoh @ National Institute of Informatics, Tokyo, Japan | flat.caizhizhu.nii.txt | 0.74165 | 6.4535 | color lbp feature, flat class representation, canonical correlation analysis, random walk ranking |
LIG | Georges Quénot - LIG | LIG_knn_opp_sift_flat.top | 0.74425 | 6.9932 | Fusion of KNNs with dense and Harris-Laplace filtered opponent SIFTs, optimized for the flat measure |
Regularities | Omid Madani, SRI International
Brian Burns, SRI International | labels.test.aggregation.inImageNet.txt | 0.75067 | 6.6738 | similar to file 1, but aggregation of several models, from different passes (so may give better accuracy within 5) |
Regularities | Omid Madani, SRI International
Brian Burns, SRI International | labels.test.pass6.boost001marg004maxl400.inImageNet.txt | 0.75411 | 6.9017 | Results using a flat (linear) sparse model (online index learning, after 6 passes) with induced features (built on top of the provided 1000 discretized sift features) |
Regularities | Omid Madani, SRI International
Brian Burns, SRI International | labels.test.pass6.boost001marg004maxl400.inImageNet.txt | 0.75411 | 6.9017 | same as file 1 |
LIG | Georges Quénot - LIG | LIG_knn_opp_sift_hierarchical.top | 0.77154 | 6.9959 | Fusion of KNNs with dense and Harris-Laplace filtered opponent SIFTs, optimized for the hierarchical measure |
National Institute of Informatics | Cai-Zhi ZHU @ National Institute of Informatics, Tokyo, Japan
Xiao ZHOU @ Hefei Normal Univ., Hefei, China
Shin'ichi Satoh @ National Institute of Informatics, Tokyo, Japan | cost.caizhizhu.nii.txt | 0.78659 | 6.8725 | color lbp feature, canonical correlation analysis, random walk ranking |
Regularities | Omid Madani, SRI International
Brian Burns, SRI International | labels.test.pass3.s4.inImageNet.txt | 0.80295 | 6.9414 | similar to file 1 except at pass 3 with smaller outdegree (fan-out) and higher learning rate (so the performance should be inferior) |
ITNLP_HIT | Deyuan Zhang, HIT;
Wenfeng Xuan, HIT;
Xiaolong Wang, HIT;
Bingquan Liu, HIT;
Chengjie Sun, HIT;
| predict_label_2.txt | 0.9883 | 10.345 | resize image to 160 pixels keeping the aspect ratio, extract geometric blur and sift features, a neural network reduces the feature dimension, naive bayes with the co-occurrence of the descriptors |
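The table above is plain pipe-delimited text in which the team-members cell wraps across several physical lines. As a minimal sketch of one way to parse it back into records and rank submissions by flat cost: the `SAMPLE` excerpt, the dictionary field names, and the `"; "` separator used to fold member lines are illustrative assumptions, not part of any official tooling, and the sketch assumes no description itself contains a `|`.

```python
# Two records excerpted from the table: one with a wrapped member cell, one on a single line.
SAMPLE = """\
NEC-UIUC | NEC: Yuanqing Lin, Kai Yu
UIUC: Thomas Huang | flat_opt.txt | 0.28191 | 2.1144 | optimized for top-5 hit rate |
XRCE | Jorge Sanchez, XRCE | xrce_res_18aug.txt | 0.33649 | 2.5553 | Fisher kernel framework |"""

def parse_records(text):
    """Accumulate physical lines until a full 6-field record is buffered.

    A complete row splits into 7 parts on "|" (6 fields plus the empty
    tail produced by the trailing "|"); wrapped member lines are folded
    into one cell with "; ". Assumes no field contains a literal "|".
    """
    records, buf = [], []
    for line in text.splitlines():
        buf.append(line.strip())
        fields = [f.strip() for f in "; ".join(buf).split("|")]
        if len(fields) >= 7:
            team, members, fname, flat, hie, desc = fields[:6]
            records.append({"team": team, "members": members,
                            "filename": fname, "flat": float(flat),
                            "hie": float(hie), "description": desc})
            buf = []
    return records

# Rank by flat cost, lowest (best) first.
rows = sorted(parse_records(SAMPLE), key=lambda r: r["flat"])
```

The same loop works on the full table once the header row is skipped, since the header's cost columns are not numeric.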